Test Report: Docker_Linux_crio 21974

4cf3e568bd19aa010164d0f2afa2e28844e6f351:2025-11-26:42526
Failed tests (37/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 13.53
36 TestAddons/parallel/RegistryCreds 0.38
37 TestAddons/parallel/Ingress 146.62
38 TestAddons/parallel/InspektorGadget 5.46
39 TestAddons/parallel/MetricsServer 5.3
41 TestAddons/parallel/CSI 40.95
42 TestAddons/parallel/Headlamp 2.62
43 TestAddons/parallel/CloudSpanner 5.24
44 TestAddons/parallel/LocalPath 9.07
45 TestAddons/parallel/NvidiaDevicePlugin 5.24
46 TestAddons/parallel/Yakd 6.23
47 TestAddons/parallel/AmdGpuDevicePlugin 6.24
97 TestFunctional/parallel/ServiceCmdConnect 602.68
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.62
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.97
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.87
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
153 TestFunctional/parallel/ServiceCmd/Format 0.52
154 TestFunctional/parallel/ServiceCmd/URL 0.52
191 TestJSONOutput/pause/Command 2.41
197 TestJSONOutput/unpause/Command 2.12
274 TestPause/serial/Pause 6.28
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.19
303 TestStartStop/group/old-k8s-version/serial/Pause 5.77
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.04
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.13
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.09
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.45
333 TestStartStop/group/newest-cni/serial/Pause 6.19
342 TestStartStop/group/no-preload/serial/Pause 6.36
345 TestStartStop/group/embed-certs/serial/Pause 6.19
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.07
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable volcano --alsologtostderr -v=1: exit status 11 (242.212197ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:06.234915   23603 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:06.235233   23603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:06.235244   23603 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:06.235248   23603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:06.235439   23603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:06.235664   23603 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:06.235992   23603 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:06.236011   23603 addons.go:622] checking whether the cluster is paused
	I1126 19:37:06.236088   23603 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:06.236103   23603 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:06.236446   23603 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:06.254434   23603 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:06.254494   23603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:06.271511   23603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:06.367681   23603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:06.367746   23603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:06.395019   23603 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:06.395042   23603 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:06.395049   23603 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:06.395053   23603 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:06.395056   23603 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:06.395060   23603 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:06.395063   23603 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:06.395066   23603 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:06.395069   23603 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:06.395074   23603 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:06.395077   23603 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:06.395080   23603 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:06.395082   23603 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:06.395085   23603 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:06.395088   23603 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:06.395093   23603 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:06.395107   23603 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:06.395114   23603 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:06.395119   23603 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:06.395128   23603 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:06.395133   23603 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:06.395140   23603 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:06.395144   23603 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:06.395151   23603 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:06.395155   23603 cri.go:89] found id: ""
	I1126 19:37:06.395192   23603 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:06.408528   23603 out.go:203] 
	W1126 19:37:06.409598   23603 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:06.409612   23603 out.go:285] * 
	* 
	W1126 19:37:06.413196   23603 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:06.414472   23603 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)

TestAddons/parallel/Registry (13.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.05538ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002996557s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002626682s
addons_test.go:392: (dbg) Run:  kubectl --context addons-368879 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-368879 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-368879 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.097867713s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 ip
2025/11/26 19:37:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable registry --alsologtostderr -v=1: exit status 11 (231.150229ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:28.567100   25905 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:28.567326   25905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:28.567334   25905 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:28.567338   25905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:28.567516   25905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:28.567734   25905 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:28.568031   25905 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:28.568048   25905 addons.go:622] checking whether the cluster is paused
	I1126 19:37:28.568123   25905 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:28.568137   25905 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:28.568479   25905 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:28.585517   25905 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:28.585550   25905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:28.600913   25905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:28.697317   25905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:28.697394   25905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:28.725359   25905 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:28.725394   25905 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:28.725402   25905 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:28.725409   25905 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:28.725414   25905 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:28.725419   25905 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:28.725424   25905 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:28.725428   25905 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:28.725433   25905 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:28.725446   25905 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:28.725464   25905 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:28.725470   25905 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:28.725480   25905 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:28.725485   25905 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:28.725490   25905 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:28.725501   25905 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:28.725509   25905 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:28.725516   25905 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:28.725521   25905 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:28.725525   25905 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:28.725533   25905 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:28.725538   25905 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:28.725545   25905 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:28.725548   25905 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:28.725551   25905 cri.go:89] found id: ""
	I1126 19:37:28.725615   25905 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:28.738127   25905 out.go:203] 
	W1126 19:37:28.739207   25905 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:28.739232   25905 out.go:285] * 
	* 
	W1126 19:37:28.742136   25905 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:28.743208   25905 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.53s)

TestAddons/parallel/RegistryCreds (0.38s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.849056ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-368879
addons_test.go:332: (dbg) Run:  kubectl --context addons-368879 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (234.650488ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:34.186343   26266 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:34.186625   26266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:34.186634   26266 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:34.186638   26266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:34.186816   26266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:34.187040   26266 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:34.187324   26266 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:34.187341   26266 addons.go:622] checking whether the cluster is paused
	I1126 19:37:34.187417   26266 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:34.187431   26266 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:34.187818   26266 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:34.204590   26266 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:34.204639   26266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:34.219983   26266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:34.315300   26266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:34.315387   26266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:34.343468   26266 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:34.343487   26266 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:34.343492   26266 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:34.343498   26266 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:34.343502   26266 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:34.343507   26266 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:34.343512   26266 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:34.343516   26266 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:34.343520   26266 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:34.343527   26266 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:34.343535   26266 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:34.343539   26266 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:34.343544   26266 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:34.343548   26266 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:34.343552   26266 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:34.343561   26266 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:34.343566   26266 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:34.343571   26266 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:34.343576   26266 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:34.343580   26266 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:34.343585   26266 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:34.343589   26266 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:34.343592   26266 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:34.343596   26266 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:34.343600   26266 cri.go:89] found id: ""
	I1126 19:37:34.343638   26266 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:34.358747   26266 out.go:203] 
	W1126 19:37:34.359998   26266 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:34.360021   26266 out.go:285] * 
	* 
	W1126 19:37:34.364092   26266 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:34.365332   26266 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.38s)

TestAddons/parallel/Ingress (146.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-368879 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-368879 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-368879 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [05e3f0cf-f584-45a4-8207-05bff33cd676] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [05e3f0cf-f584-45a4-8207-05bff33cd676] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.002978493s
I1126 19:37:23.658483   14258 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.289545369s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-368879 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-368879
helpers_test.go:243: (dbg) docker inspect addons-368879:
-- stdout --
	[
	    {
	        "Id": "c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f",
	        "Created": "2025-11-26T19:35:21.207538359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16263,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T19:35:21.242595793Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f/hosts",
	        "LogPath": "/var/lib/docker/containers/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f-json.log",
	        "Name": "/addons-368879",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-368879:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-368879",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f",
	                "LowerDir": "/var/lib/docker/overlay2/0d523fa88ed8896b4c412fe32c96c7888feffb0b8675ad8f5cecec48a7a18c10-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d523fa88ed8896b4c412fe32c96c7888feffb0b8675ad8f5cecec48a7a18c10/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d523fa88ed8896b4c412fe32c96c7888feffb0b8675ad8f5cecec48a7a18c10/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d523fa88ed8896b4c412fe32c96c7888feffb0b8675ad8f5cecec48a7a18c10/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-368879",
	                "Source": "/var/lib/docker/volumes/addons-368879/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-368879",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-368879",
	                "name.minikube.sigs.k8s.io": "addons-368879",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1045bd9f6d8c38e8a848c1c51bb8163e146e1d17c95af24aedd024c0c52fdf6c",
	            "SandboxKey": "/var/run/docker/netns/1045bd9f6d8c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-368879": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d7d50d131ba94f9c1dcd0658d7aa81e19dda84f0c78ad10918d150767794fbb9",
	                    "EndpointID": "ba2b32dd65fd6c3b57eff8942bcf5fb1a66a971fa8132ac8c556cafc6c58b49d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "46:dc:7a:ab:0a:c1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-368879",
	                        "c5a5d6b5ca14"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-368879 -n addons-368879
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-368879 logs -n 25: (1.072008309s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-671361 --alsologtostderr --binary-mirror http://127.0.0.1:46231 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-671361 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ delete  │ -p binary-mirror-671361                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-671361 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ addons  │ enable dashboard -p addons-368879                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ addons  │ disable dashboard -p addons-368879                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ start   │ -p addons-368879 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-368879 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ enable headlamp -p addons-368879 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ ssh     │ addons-368879 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ ip      │ addons-368879 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-368879 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-368879                                                                                                                                                                                                                                                                                                                                                                                           │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-368879 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ ssh     │ addons-368879 ssh cat /opt/local-path-provisioner/pvc-73e84fea-39e5-4ca4-a00c-b412c775b12a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-368879 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │                     │
	│ addons  │ addons-368879 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │                     │
	│ ip      │ addons-368879 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-368879        │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │ 26 Nov 25 19:39 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:34:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:34:58.155861   15626 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:34:58.156078   15626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:58.156085   15626 out.go:374] Setting ErrFile to fd 2...
	I1126 19:34:58.156089   15626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:58.156283   15626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:34:58.156732   15626 out.go:368] Setting JSON to false
	I1126 19:34:58.157470   15626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1048,"bootTime":1764184650,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:34:58.157513   15626 start.go:143] virtualization: kvm guest
	I1126 19:34:58.159197   15626 out.go:179] * [addons-368879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 19:34:58.160360   15626 notify.go:221] Checking for updates...
	I1126 19:34:58.160381   15626 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:34:58.161556   15626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:34:58.162705   15626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 19:34:58.163709   15626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 19:34:58.164666   15626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 19:34:58.165667   15626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:34:58.167022   15626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:34:58.189593   15626 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 19:34:58.189698   15626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:34:58.244787   15626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-26 19:34:58.235994448 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:34:58.244876   15626 docker.go:319] overlay module found
	I1126 19:34:58.246333   15626 out.go:179] * Using the docker driver based on user configuration
	I1126 19:34:58.247200   15626 start.go:309] selected driver: docker
	I1126 19:34:58.247212   15626 start.go:927] validating driver "docker" against <nil>
	I1126 19:34:58.247221   15626 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:34:58.247723   15626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:34:58.298416   15626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-26 19:34:58.290008365 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:34:58.298577   15626 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:34:58.298779   15626 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:34:58.300228   15626 out.go:179] * Using Docker driver with root privileges
	I1126 19:34:58.301260   15626 cni.go:84] Creating CNI manager for ""
	I1126 19:34:58.301317   15626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:34:58.301328   15626 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 19:34:58.301387   15626 start.go:353] cluster config:
	{Name:addons-368879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1126 19:34:58.302546   15626 out.go:179] * Starting "addons-368879" primary control-plane node in "addons-368879" cluster
	I1126 19:34:58.303464   15626 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 19:34:58.304550   15626 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 19:34:58.305549   15626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:34:58.305574   15626 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 19:34:58.305581   15626 cache.go:65] Caching tarball of preloaded images
	I1126 19:34:58.305640   15626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 19:34:58.305667   15626 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 19:34:58.305675   15626 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 19:34:58.305979   15626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/config.json ...
	I1126 19:34:58.306008   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/config.json: {Name:mkf0e501ca958c4c4e8ce566039c46c9b04d2c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:34:58.320818   15626 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1126 19:34:58.320924   15626 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1126 19:34:58.320946   15626 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1126 19:34:58.320952   15626 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1126 19:34:58.320958   15626 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1126 19:34:58.320963   15626 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1126 19:35:10.156164   15626 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1126 19:35:10.156199   15626 cache.go:243] Successfully downloaded all kic artifacts
	I1126 19:35:10.156240   15626 start.go:360] acquireMachinesLock for addons-368879: {Name:mk3b87926377a18b5a2efa47c95e4b5d36fee531 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 19:35:10.156337   15626 start.go:364] duration metric: took 75.941µs to acquireMachinesLock for "addons-368879"
	I1126 19:35:10.156368   15626 start.go:93] Provisioning new machine with config: &{Name:addons-368879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:35:10.156437   15626 start.go:125] createHost starting for "" (driver="docker")
	I1126 19:35:10.157865   15626 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1126 19:35:10.158055   15626 start.go:159] libmachine.API.Create for "addons-368879" (driver="docker")
	I1126 19:35:10.158092   15626 client.go:173] LocalClient.Create starting
	I1126 19:35:10.158227   15626 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 19:35:10.246048   15626 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 19:35:10.323163   15626 cli_runner.go:164] Run: docker network inspect addons-368879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 19:35:10.340157   15626 cli_runner.go:211] docker network inspect addons-368879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 19:35:10.340223   15626 network_create.go:284] running [docker network inspect addons-368879] to gather additional debugging logs...
	I1126 19:35:10.340239   15626 cli_runner.go:164] Run: docker network inspect addons-368879
	W1126 19:35:10.355499   15626 cli_runner.go:211] docker network inspect addons-368879 returned with exit code 1
	I1126 19:35:10.355526   15626 network_create.go:287] error running [docker network inspect addons-368879]: docker network inspect addons-368879: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-368879 not found
	I1126 19:35:10.355540   15626 network_create.go:289] output of [docker network inspect addons-368879]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-368879 not found
	
	** /stderr **
	I1126 19:35:10.355629   15626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 19:35:10.370616   15626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d818d0}
	I1126 19:35:10.370662   15626 network_create.go:124] attempt to create docker network addons-368879 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1126 19:35:10.370702   15626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-368879 addons-368879
	I1126 19:35:10.413482   15626 network_create.go:108] docker network addons-368879 192.168.49.0/24 created
	I1126 19:35:10.413518   15626 kic.go:121] calculated static IP "192.168.49.2" for the "addons-368879" container
	I1126 19:35:10.413582   15626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 19:35:10.428058   15626 cli_runner.go:164] Run: docker volume create addons-368879 --label name.minikube.sigs.k8s.io=addons-368879 --label created_by.minikube.sigs.k8s.io=true
	I1126 19:35:10.442836   15626 oci.go:103] Successfully created a docker volume addons-368879
	I1126 19:35:10.442944   15626 cli_runner.go:164] Run: docker run --rm --name addons-368879-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-368879 --entrypoint /usr/bin/test -v addons-368879:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 19:35:16.843863   15626 cli_runner.go:217] Completed: docker run --rm --name addons-368879-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-368879 --entrypoint /usr/bin/test -v addons-368879:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (6.400878368s)
	I1126 19:35:16.843902   15626 oci.go:107] Successfully prepared a docker volume addons-368879
	I1126 19:35:16.843972   15626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:35:16.843994   15626 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 19:35:16.844045   15626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-368879:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 19:35:21.138174   15626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-368879:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.294095631s)
	I1126 19:35:21.138200   15626 kic.go:203] duration metric: took 4.294212367s to extract preloaded images to volume ...
	W1126 19:35:21.138283   15626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 19:35:21.138311   15626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 19:35:21.138358   15626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 19:35:21.192733   15626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-368879 --name addons-368879 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-368879 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-368879 --network addons-368879 --ip 192.168.49.2 --volume addons-368879:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 19:35:21.488347   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Running}}
	I1126 19:35:21.506665   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:21.524524   15626 cli_runner.go:164] Run: docker exec addons-368879 stat /var/lib/dpkg/alternatives/iptables
	I1126 19:35:21.568584   15626 oci.go:144] the created container "addons-368879" has a running status.
	I1126 19:35:21.568611   15626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa...
	I1126 19:35:21.584666   15626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 19:35:21.609405   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:21.626611   15626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 19:35:21.626631   15626 kic_runner.go:114] Args: [docker exec --privileged addons-368879 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 19:35:21.672761   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:21.692040   15626 machine.go:94] provisionDockerMachine start ...
	I1126 19:35:21.692138   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:21.713531   15626 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:21.713781   15626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:35:21.713796   15626 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 19:35:21.715014   15626 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33214->127.0.0.1:32768: read: connection reset by peer
	I1126 19:35:24.850665   15626 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-368879
	
	I1126 19:35:24.850694   15626 ubuntu.go:182] provisioning hostname "addons-368879"
	I1126 19:35:24.850751   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:24.867845   15626 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:24.868054   15626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:35:24.868066   15626 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-368879 && echo "addons-368879" | sudo tee /etc/hostname
	I1126 19:35:25.009358   15626 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-368879
	
	I1126 19:35:25.009441   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.027454   15626 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:25.027658   15626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:35:25.027675   15626 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-368879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-368879/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-368879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 19:35:25.161204   15626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 19:35:25.161227   15626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 19:35:25.161255   15626 ubuntu.go:190] setting up certificates
	I1126 19:35:25.161266   15626 provision.go:84] configureAuth start
	I1126 19:35:25.161323   15626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-368879
	I1126 19:35:25.177653   15626 provision.go:143] copyHostCerts
	I1126 19:35:25.177718   15626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 19:35:25.177841   15626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 19:35:25.177912   15626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 19:35:25.177963   15626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.addons-368879 san=[127.0.0.1 192.168.49.2 addons-368879 localhost minikube]
	I1126 19:35:25.201322   15626 provision.go:177] copyRemoteCerts
	I1126 19:35:25.201367   15626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 19:35:25.201399   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.217022   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:25.312375   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 19:35:25.329560   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 19:35:25.344739   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 19:35:25.360149   15626 provision.go:87] duration metric: took 198.873025ms to configureAuth
	I1126 19:35:25.360169   15626 ubuntu.go:206] setting minikube options for container-runtime
	I1126 19:35:25.360320   15626 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:35:25.360415   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.376890   15626 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:25.377089   15626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:35:25.377105   15626 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 19:35:25.645667   15626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 19:35:25.645695   15626 machine.go:97] duration metric: took 3.953625498s to provisionDockerMachine
	I1126 19:35:25.645707   15626 client.go:176] duration metric: took 15.487604821s to LocalClient.Create
	I1126 19:35:25.645728   15626 start.go:167] duration metric: took 15.487672535s to libmachine.API.Create "addons-368879"
	I1126 19:35:25.645737   15626 start.go:293] postStartSetup for "addons-368879" (driver="docker")
	I1126 19:35:25.645752   15626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 19:35:25.645823   15626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 19:35:25.645868   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.663631   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:25.761398   15626 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 19:35:25.764411   15626 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 19:35:25.764442   15626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 19:35:25.764453   15626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 19:35:25.764517   15626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 19:35:25.764549   15626 start.go:296] duration metric: took 118.804834ms for postStartSetup
	I1126 19:35:25.764818   15626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-368879
	I1126 19:35:25.781036   15626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/config.json ...
	I1126 19:35:25.781283   15626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:35:25.781329   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.796877   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:25.889609   15626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 19:35:25.893559   15626 start.go:128] duration metric: took 15.737106954s to createHost
	I1126 19:35:25.893578   15626 start.go:83] releasing machines lock for "addons-368879", held for 15.737227352s
	I1126 19:35:25.893626   15626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-368879
	I1126 19:35:25.909889   15626 ssh_runner.go:195] Run: cat /version.json
	I1126 19:35:25.909934   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.909990   15626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 19:35:25.910065   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.927286   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:25.927779   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:26.019734   15626 ssh_runner.go:195] Run: systemctl --version
	I1126 19:35:26.092394   15626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 19:35:26.124073   15626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 19:35:26.128247   15626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 19:35:26.128313   15626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 19:35:26.151250   15626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 19:35:26.151268   15626 start.go:496] detecting cgroup driver to use...
	I1126 19:35:26.151292   15626 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 19:35:26.151322   15626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 19:35:26.165164   15626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 19:35:26.175666   15626 docker.go:218] disabling cri-docker service (if available) ...
	I1126 19:35:26.175709   15626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 19:35:26.190094   15626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 19:35:26.205207   15626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 19:35:26.279881   15626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 19:35:26.361506   15626 docker.go:234] disabling docker service ...
	I1126 19:35:26.361556   15626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 19:35:26.378076   15626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 19:35:26.389425   15626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 19:35:26.470171   15626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 19:35:26.547490   15626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 19:35:26.558374   15626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 19:35:26.571231   15626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 19:35:26.571301   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.580474   15626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 19:35:26.580543   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.588195   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.595858   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.603584   15626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 19:35:26.610601   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.618164   15626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.630037   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.637709   15626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 19:35:26.644146   15626 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1126 19:35:26.644191   15626 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1126 19:35:26.654993   15626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 19:35:26.661547   15626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:35:26.736695   15626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 19:35:26.864227   15626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 19:35:26.864298   15626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 19:35:26.867802   15626 start.go:564] Will wait 60s for crictl version
	I1126 19:35:26.867849   15626 ssh_runner.go:195] Run: which crictl
	I1126 19:35:26.871019   15626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 19:35:26.893360   15626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 19:35:26.893451   15626 ssh_runner.go:195] Run: crio --version
	I1126 19:35:26.918429   15626 ssh_runner.go:195] Run: crio --version
	I1126 19:35:26.944661   15626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 19:35:26.945711   15626 cli_runner.go:164] Run: docker network inspect addons-368879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 19:35:26.961960   15626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 19:35:26.965528   15626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:35:26.974923   15626 kubeadm.go:884] updating cluster {Name:addons-368879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 19:35:26.975021   15626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:35:26.975063   15626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:35:27.004349   15626 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:35:27.004368   15626 crio.go:433] Images already preloaded, skipping extraction
	I1126 19:35:27.004414   15626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:35:27.027312   15626 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:35:27.027331   15626 cache_images.go:86] Images are preloaded, skipping loading
	I1126 19:35:27.027338   15626 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1126 19:35:27.027433   15626 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-368879 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 19:35:27.027514   15626 ssh_runner.go:195] Run: crio config
	I1126 19:35:27.068260   15626 cni.go:84] Creating CNI manager for ""
	I1126 19:35:27.068283   15626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:35:27.068300   15626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 19:35:27.068319   15626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-368879 NodeName:addons-368879 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 19:35:27.068452   15626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-368879"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 19:35:27.068530   15626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 19:35:27.075884   15626 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 19:35:27.075938   15626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 19:35:27.082894   15626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 19:35:27.094452   15626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 19:35:27.108321   15626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1126 19:35:27.119434   15626 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1126 19:35:27.122497   15626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:35:27.131097   15626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:35:27.205743   15626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:35:27.226309   15626 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879 for IP: 192.168.49.2
	I1126 19:35:27.226329   15626 certs.go:195] generating shared ca certs ...
	I1126 19:35:27.226347   15626 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.226480   15626 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 19:35:27.266098   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt ...
	I1126 19:35:27.266120   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt: {Name:mk08fe333e2718aa9edd591caefe2790eeb5ee03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.266282   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key ...
	I1126 19:35:27.266296   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key: {Name:mka51114cd9cf1bef98339a3911048402c34d92a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.266397   15626 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 19:35:27.367200   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt ...
	I1126 19:35:27.367221   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt: {Name:mk9667acd9406cd8f55b4e5d2ce62084c1571746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.367379   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key ...
	I1126 19:35:27.367396   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key: {Name:mk31ed3fba07b16735240e6c762ea28b2931504e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.367508   15626 certs.go:257] generating profile certs ...
	I1126 19:35:27.367563   15626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.key
	I1126 19:35:27.367576   15626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt with IP's: []
	I1126 19:35:27.445350   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt ...
	I1126 19:35:27.445369   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: {Name:mk13337429698fea7d30e4adeecfa0bf36f32c90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.445523   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.key ...
	I1126 19:35:27.445537   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.key: {Name:mk5c06ab1f23d6acc5f1b73e1dd4952a8de6d5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.445637   15626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key.743572d0
	I1126 19:35:27.445656   15626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt.743572d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1126 19:35:27.592468   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt.743572d0 ...
	I1126 19:35:27.592489   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt.743572d0: {Name:mk7d32c35019d4cd63bfbdcd4906e3c002cfa51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.592641   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key.743572d0 ...
	I1126 19:35:27.592657   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key.743572d0: {Name:mkd98dad1bb4c177a838df62609be2b8b55f5481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.592753   15626 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt.743572d0 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt
	I1126 19:35:27.592830   15626 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key.743572d0 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key
	I1126 19:35:27.592878   15626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.key
	I1126 19:35:27.592894   15626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.crt with IP's: []
	I1126 19:35:27.722491   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.crt ...
	I1126 19:35:27.722510   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.crt: {Name:mkdb50128ffcd4eb9744e0b6126b238e19b333f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.722651   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.key ...
	I1126 19:35:27.722664   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.key: {Name:mk0567f2bc89b129782af6e1ddd0b88433338274 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.722886   15626 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 19:35:27.722924   15626 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 19:35:27.722950   15626 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 19:35:27.722973   15626 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 19:35:27.723535   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 19:35:27.740362   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 19:35:27.756213   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 19:35:27.771714   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 19:35:27.787072   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 19:35:27.802591   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 19:35:27.818067   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 19:35:27.833188   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 19:35:27.848496   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 19:35:27.865570   15626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 19:35:27.876587   15626 ssh_runner.go:195] Run: openssl version
	I1126 19:35:27.881989   15626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 19:35:27.891300   15626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:35:27.894489   15626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:35:27.894530   15626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:35:27.927259   15626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 19:35:27.935372   15626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 19:35:27.938751   15626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 19:35:27.938802   15626 kubeadm.go:401] StartCluster: {Name:addons-368879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:35:27.938876   15626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:35:27.938911   15626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:35:27.966560   15626 cri.go:89] found id: ""
	I1126 19:35:27.966621   15626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 19:35:27.973848   15626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 19:35:27.980834   15626 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 19:35:27.980880   15626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 19:35:27.987536   15626 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 19:35:27.987551   15626 kubeadm.go:158] found existing configuration files:
	
	I1126 19:35:27.987584   15626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 19:35:27.994408   15626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 19:35:27.994440   15626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 19:35:28.000807   15626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 19:35:28.007486   15626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 19:35:28.007529   15626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 19:35:28.013850   15626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 19:35:28.020601   15626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 19:35:28.020631   15626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 19:35:28.026832   15626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 19:35:28.033413   15626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 19:35:28.033445   15626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 19:35:28.039815   15626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 19:35:28.091882   15626 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 19:35:28.143294   15626 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 19:35:36.677875   15626 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 19:35:36.677952   15626 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 19:35:36.678067   15626 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 19:35:36.678137   15626 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 19:35:36.678186   15626 kubeadm.go:319] OS: Linux
	I1126 19:35:36.678236   15626 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 19:35:36.678282   15626 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 19:35:36.678326   15626 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 19:35:36.678372   15626 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 19:35:36.678414   15626 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 19:35:36.678481   15626 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 19:35:36.678525   15626 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 19:35:36.678572   15626 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 19:35:36.678635   15626 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 19:35:36.678769   15626 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 19:35:36.678900   15626 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 19:35:36.678975   15626 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 19:35:36.680434   15626 out.go:252]   - Generating certificates and keys ...
	I1126 19:35:36.680526   15626 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 19:35:36.680599   15626 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 19:35:36.680675   15626 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 19:35:36.680740   15626 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 19:35:36.680820   15626 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 19:35:36.680880   15626 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 19:35:36.680932   15626 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 19:35:36.681034   15626 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-368879 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1126 19:35:36.681085   15626 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 19:35:36.681192   15626 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-368879 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1126 19:35:36.681248   15626 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 19:35:36.681303   15626 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 19:35:36.681346   15626 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 19:35:36.681400   15626 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 19:35:36.681451   15626 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 19:35:36.681547   15626 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 19:35:36.681627   15626 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 19:35:36.681752   15626 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 19:35:36.681810   15626 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 19:35:36.681909   15626 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 19:35:36.681973   15626 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 19:35:36.683164   15626 out.go:252]   - Booting up control plane ...
	I1126 19:35:36.683231   15626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 19:35:36.683313   15626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 19:35:36.683375   15626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 19:35:36.683485   15626 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 19:35:36.683597   15626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 19:35:36.683703   15626 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 19:35:36.683773   15626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 19:35:36.683808   15626 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 19:35:36.683913   15626 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 19:35:36.684013   15626 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 19:35:36.684072   15626 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001561636s
	I1126 19:35:36.684153   15626 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 19:35:36.684230   15626 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1126 19:35:36.684311   15626 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 19:35:36.684380   15626 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 19:35:36.684439   15626 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.101321794s
	I1126 19:35:36.684512   15626 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.848795955s
	I1126 19:35:36.684576   15626 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501880389s
	I1126 19:35:36.684666   15626 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 19:35:36.684786   15626 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 19:35:36.684870   15626 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 19:35:36.685062   15626 kubeadm.go:319] [mark-control-plane] Marking the node addons-368879 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 19:35:36.685154   15626 kubeadm.go:319] [bootstrap-token] Using token: ooclz9.4sx22jlmjqnuuxe0
	I1126 19:35:36.686446   15626 out.go:252]   - Configuring RBAC rules ...
	I1126 19:35:36.686584   15626 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 19:35:36.686686   15626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 19:35:36.686826   15626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 19:35:36.686946   15626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 19:35:36.687088   15626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 19:35:36.687184   15626 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 19:35:36.687315   15626 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 19:35:36.687377   15626 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 19:35:36.687450   15626 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 19:35:36.687473   15626 kubeadm.go:319] 
	I1126 19:35:36.687552   15626 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 19:35:36.687561   15626 kubeadm.go:319] 
	I1126 19:35:36.687672   15626 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 19:35:36.687681   15626 kubeadm.go:319] 
	I1126 19:35:36.687723   15626 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 19:35:36.687800   15626 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 19:35:36.687844   15626 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 19:35:36.687849   15626 kubeadm.go:319] 
	I1126 19:35:36.687922   15626 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 19:35:36.687931   15626 kubeadm.go:319] 
	I1126 19:35:36.687985   15626 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 19:35:36.687992   15626 kubeadm.go:319] 
	I1126 19:35:36.688036   15626 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 19:35:36.688095   15626 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 19:35:36.688185   15626 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 19:35:36.688195   15626 kubeadm.go:319] 
	I1126 19:35:36.688317   15626 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 19:35:36.688419   15626 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 19:35:36.688426   15626 kubeadm.go:319] 
	I1126 19:35:36.688556   15626 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ooclz9.4sx22jlmjqnuuxe0 \
	I1126 19:35:36.688668   15626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 19:35:36.688692   15626 kubeadm.go:319] 	--control-plane 
	I1126 19:35:36.688701   15626 kubeadm.go:319] 
	I1126 19:35:36.688776   15626 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 19:35:36.688782   15626 kubeadm.go:319] 
	I1126 19:35:36.688857   15626 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ooclz9.4sx22jlmjqnuuxe0 \
	I1126 19:35:36.688952   15626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
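The join command above pairs a bootstrap token with a `--discovery-token-ca-cert-hash`, which is the SHA-256 of the cluster CA's DER-encoded public key. A minimal sketch of how that hash is derived, using a throwaway self-signed CA so it runs anywhere (in a real cluster the input would be `/etc/kubernetes/pki/ca.crt`):

```shell
# Generate a throwaway CA purely to demonstrate the hash computation;
# paths and the demo subject are hypothetical, not from the cluster above.
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" \
  -subj "/CN=demo-ca" -days 1 2>/dev/null
# kubeadm's discovery hash = sha256 over the DER-encoded public key
hash=$(openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
rm -rf "$tmp"
```

Joining nodes recompute this hash from the CA presented during discovery and refuse to join on a mismatch, which is what makes the token-based bootstrap safe against a spoofed API server.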
	I1126 19:35:36.688962   15626 cni.go:84] Creating CNI manager for ""
	I1126 19:35:36.688967   15626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:35:36.690165   15626 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 19:35:36.691257   15626 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 19:35:36.695142   15626 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 19:35:36.695159   15626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 19:35:36.707382   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 19:35:36.893563   15626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 19:35:36.893660   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:36.893726   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-368879 minikube.k8s.io/updated_at=2025_11_26T19_35_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=addons-368879 minikube.k8s.io/primary=true
	I1126 19:35:36.904375   15626 ops.go:34] apiserver oom_adj: -16
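The `oom_adj: -16` line above comes from reading the apiserver's OOM adjustment out of `/proc`; minikube checks it to confirm the kernel will strongly prefer other processes when memory runs out. A sketch of the same probe, pointed at the current shell instead of `kube-apiserver` (modern kernels expose the equivalent `oom_score_adj` field):

```shell
# Every Linux process exposes its OOM adjustment under /proc/<pid>/;
# here we read it for the current shell rather than kube-apiserver.
pid=$$
oom=$(cat /proc/$pid/oom_score_adj)
echo "pid $pid oom_score_adj: $oom"
```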
	I1126 19:35:36.963693   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:37.463734   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:37.964432   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:38.464280   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:38.964662   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:39.464696   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:39.964646   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:40.463850   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:40.964412   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:41.464370   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:41.524765   15626 kubeadm.go:1114] duration metric: took 4.631161589s to wait for elevateKubeSystemPrivileges
	I1126 19:35:41.524806   15626 kubeadm.go:403] duration metric: took 13.586008107s to StartCluster
	I1126 19:35:41.524826   15626 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:41.524926   15626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 19:35:41.525289   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:41.525487   15626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 19:35:41.525500   15626 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:35:41.525551   15626 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1126 19:35:41.525689   15626 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:35:41.525701   15626 addons.go:70] Setting default-storageclass=true in profile "addons-368879"
	I1126 19:35:41.525717   15626 addons.go:70] Setting metrics-server=true in profile "addons-368879"
	I1126 19:35:41.525724   15626 addons.go:70] Setting cloud-spanner=true in profile "addons-368879"
	I1126 19:35:41.525742   15626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-368879"
	I1126 19:35:41.525745   15626 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-368879"
	I1126 19:35:41.525751   15626 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-368879"
	I1126 19:35:41.525730   15626 addons.go:70] Setting inspektor-gadget=true in profile "addons-368879"
	I1126 19:35:41.525769   15626 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-368879"
	I1126 19:35:41.525778   15626 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-368879"
	I1126 19:35:41.525780   15626 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-368879"
	I1126 19:35:41.525784   15626 addons.go:70] Setting gcp-auth=true in profile "addons-368879"
	I1126 19:35:41.525798   15626 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-368879"
	I1126 19:35:41.525802   15626 mustload.go:66] Loading cluster: addons-368879
	I1126 19:35:41.525780   15626 addons.go:70] Setting storage-provisioner=true in profile "addons-368879"
	I1126 19:35:41.525832   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.525838   15626 addons.go:239] Setting addon storage-provisioner=true in "addons-368879"
	I1126 19:35:41.525886   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.525701   15626 addons.go:70] Setting yakd=true in profile "addons-368879"
	I1126 19:35:41.525905   15626 addons.go:239] Setting addon yakd=true in "addons-368879"
	I1126 19:35:41.525923   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.525974   15626 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:35:41.526069   15626 addons.go:70] Setting ingress=true in profile "addons-368879"
	I1126 19:35:41.526083   15626 addons.go:239] Setting addon ingress=true in "addons-368879"
	I1126 19:35:41.526109   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526200   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526214   15626 addons.go:70] Setting volcano=true in profile "addons-368879"
	I1126 19:35:41.526229   15626 addons.go:239] Setting addon volcano=true in "addons-368879"
	I1126 19:35:41.526229   15626 addons.go:70] Setting ingress-dns=true in profile "addons-368879"
	I1126 19:35:41.526242   15626 addons.go:239] Setting addon ingress-dns=true in "addons-368879"
	I1126 19:35:41.526251   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526275   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526346   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526379   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526386   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526560   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526718   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526879   15626 addons.go:70] Setting volumesnapshots=true in profile "addons-368879"
	I1126 19:35:41.526903   15626 addons.go:239] Setting addon volumesnapshots=true in "addons-368879"
	I1126 19:35:41.526932   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526824   15626 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-368879"
	I1126 19:35:41.526956   15626 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-368879"
	I1126 19:35:41.526987   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.525774   15626 addons.go:239] Setting addon inspektor-gadget=true in "addons-368879"
	I1126 19:35:41.526219   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.527366   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.527677   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526760   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.528197   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.525743   15626 addons.go:239] Setting addon metrics-server=true in "addons-368879"
	I1126 19:35:41.528281   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.528778   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.525772   15626 addons.go:239] Setting addon cloud-spanner=true in "addons-368879"
	I1126 19:35:41.529087   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.529611   15626 addons.go:70] Setting registry-creds=true in profile "addons-368879"
	I1126 19:35:41.529649   15626 addons.go:239] Setting addon registry-creds=true in "addons-368879"
	I1126 19:35:41.529684   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.529967   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.530176   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.531564   15626 out.go:179] * Verifying Kubernetes components...
	I1126 19:35:41.526811   15626 addons.go:70] Setting registry=true in profile "addons-368879"
	I1126 19:35:41.531622   15626 addons.go:239] Setting addon registry=true in "addons-368879"
	I1126 19:35:41.531649   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.532096   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526204   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.532869   15626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:35:41.525805   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.536791   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.537776   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.575857   15626 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1126 19:35:41.577017   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1126 19:35:41.577045   15626 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1126 19:35:41.577107   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.586076   15626 addons.go:239] Setting addon default-storageclass=true in "addons-368879"
	I1126 19:35:41.586137   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.589112   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.601221   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.602929   15626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1126 19:35:41.603321   15626 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1126 19:35:41.605039   15626 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:35:41.605258   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1126 19:35:41.605164   15626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:35:41.605872   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.607835   15626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:35:41.609251   15626 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:35:41.609268   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1126 19:35:41.609316   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	W1126 19:35:41.611251   15626 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1126 19:35:41.618546   15626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 19:35:41.622004   15626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:35:41.622025   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 19:35:41.622095   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.629300   15626 out.go:179]   - Using image docker.io/registry:3.0.0
	I1126 19:35:41.629314   15626 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1126 19:35:41.630589   15626 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1126 19:35:41.630728   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1126 19:35:41.630744   15626 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1126 19:35:41.631368   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1126 19:35:41.631708   15626 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:35:41.632567   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1126 19:35:41.632305   15626 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1126 19:35:41.632723   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.633678   15626 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1126 19:35:41.636994   15626 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1126 19:35:41.637025   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1126 19:35:41.637078   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.638840   15626 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:35:41.638854   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1126 19:35:41.638911   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.639145   15626 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1126 19:35:41.640513   15626 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:35:41.640538   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1126 19:35:41.640597   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.640662   15626 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1126 19:35:41.640876   15626 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:35:41.640891   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1126 19:35:41.640957   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.640676   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1126 19:35:41.641623   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.642555   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1126 19:35:41.643692   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1126 19:35:41.643707   15626 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1126 19:35:41.643751   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.644365   15626 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-368879"
	I1126 19:35:41.644499   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.645213   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.648590   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1126 19:35:41.650018   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1126 19:35:41.651618   15626 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1126 19:35:41.651671   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1126 19:35:41.653608   15626 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1126 19:35:41.653626   15626 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1126 19:35:41.653688   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.653975   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1126 19:35:41.655339   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1126 19:35:41.656673   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1126 19:35:41.657696   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1126 19:35:41.657757   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1126 19:35:41.657865   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.673057   15626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 19:35:41.673087   15626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 19:35:41.673143   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.675538   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.680845   15626 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1126 19:35:41.682666   15626 out.go:179]   - Using image docker.io/busybox:stable
	I1126 19:35:41.684594   15626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:35:41.684614   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1126 19:35:41.684693   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.689931   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.690120   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.696180   15626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 19:35:41.702159   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.705226   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.707830   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.711616   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.724718   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.726279   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.730669   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.730706   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.732979   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.737718   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.740608   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.740639   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.744090   15626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1126 19:35:41.745122   15626 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1126 19:35:41.745152   15626 retry.go:31] will retry after 250.837123ms: ssh: handshake failed: EOF
	I1126 19:35:41.859298   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:35:41.863405   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:35:41.880200   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 19:35:41.880349   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1126 19:35:41.882972   15626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1126 19:35:41.882991   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1126 19:35:41.892625   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:35:41.901082   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:35:41.903639   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:35:41.920968   15626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1126 19:35:41.920994   15626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1126 19:35:41.922173   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:35:41.925567   15626 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1126 19:35:41.925586   15626 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1126 19:35:41.927214   15626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1126 19:35:41.927229   15626 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1126 19:35:41.929728   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1126 19:35:41.929742   15626 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1126 19:35:41.936236   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:35:41.952487   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1126 19:35:41.952594   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1126 19:35:41.960017   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1126 19:35:41.960035   15626 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1126 19:35:41.972750   15626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1126 19:35:41.972824   15626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1126 19:35:41.983234   15626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:35:41.983312   15626 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1126 19:35:41.985721   15626 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:35:41.985741   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1126 19:35:42.000234   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1126 19:35:42.000283   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1126 19:35:42.003342   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1126 19:35:42.003383   15626 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1126 19:35:42.011058   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:35:42.021359   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:35:42.023885   15626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1126 19:35:42.023905   15626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1126 19:35:42.035940   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1126 19:35:42.035965   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1126 19:35:42.040023   15626 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1126 19:35:42.041239   15626 node_ready.go:35] waiting up to 6m0s for node "addons-368879" to be "Ready" ...
	I1126 19:35:42.067391   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:35:42.067421   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1126 19:35:42.123911   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1126 19:35:42.123956   15626 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1126 19:35:42.125038   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1126 19:35:42.125057   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1126 19:35:42.139172   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:35:42.175932   15626 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:35:42.175950   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1126 19:35:42.203436   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1126 19:35:42.203479   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1126 19:35:42.238076   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:35:42.249780   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:35:42.258021   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1126 19:35:42.258105   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1126 19:35:42.295828   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1126 19:35:42.295852   15626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1126 19:35:42.371238   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1126 19:35:42.371332   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1126 19:35:42.418156   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1126 19:35:42.418222   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1126 19:35:42.449707   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1126 19:35:42.449730   15626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1126 19:35:42.501117   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1126 19:35:42.545578   15626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-368879" context rescaled to 1 replicas
	W1126 19:35:42.771206   15626 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1126 19:35:43.044818   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.143701384s)
	I1126 19:35:43.044854   15626 addons.go:495] Verifying addon ingress=true in "addons-368879"
	I1126 19:35:43.044874   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.14120073s)
	I1126 19:35:43.044965   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.108711312s)
	I1126 19:35:43.044936   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.122734533s)
	I1126 19:35:43.045051   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.033960058s)
	I1126 19:35:43.045065   15626 addons.go:495] Verifying addon registry=true in "addons-368879"
	I1126 19:35:43.045122   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023730881s)
	I1126 19:35:43.045139   15626 addons.go:495] Verifying addon metrics-server=true in "addons-368879"
	I1126 19:35:43.048586   15626 out.go:179] * Verifying ingress addon...
	I1126 19:35:43.048589   15626 out.go:179] * Verifying registry addon...
	I1126 19:35:43.049243   15626 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-368879 service yakd-dashboard -n yakd-dashboard
	
	I1126 19:35:43.050779   15626 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1126 19:35:43.051369   15626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1126 19:35:43.052962   15626 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1126 19:35:43.053115   15626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1126 19:35:43.053133   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:43.506818   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.268702461s)
	W1126 19:35:43.506868   15626 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1126 19:35:43.506891   15626 retry.go:31] will retry after 350.045154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1126 19:35:43.506904   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.257008406s)
	I1126 19:35:43.507128   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.005904803s)
	I1126 19:35:43.507155   15626 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-368879"
	I1126 19:35:43.508894   15626 out.go:179] * Verifying csi-hostpath-driver addon...
	I1126 19:35:43.511110   15626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1126 19:35:43.513016   15626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1126 19:35:43.513037   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:43.553529   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:43.553667   15626 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1126 19:35:43.553681   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:43.857191   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:35:44.014631   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:44.043438   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:44.114795   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:44.114909   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:44.514411   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:44.553166   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:44.553295   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:45.013445   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:45.113802   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:45.113934   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:45.513774   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:45.553156   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:45.553315   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:46.014176   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:46.054390   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:46.054582   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:46.268954   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.411722541s)
	I1126 19:35:46.514197   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:46.543856   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:46.552425   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:46.553422   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:47.014742   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:47.115446   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:47.115704   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:47.514112   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:47.552870   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:47.553774   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:48.014246   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:48.114730   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:48.114805   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:48.514427   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:48.553146   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:48.553351   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:49.013868   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:49.043073   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:49.114965   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:49.115152   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:49.227789   15626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1126 19:35:49.227846   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:49.245437   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:49.353171   15626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1126 19:35:49.364868   15626 addons.go:239] Setting addon gcp-auth=true in "addons-368879"
	I1126 19:35:49.364922   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:49.365244   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:49.381685   15626 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1126 19:35:49.381739   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:49.397919   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:49.491643   15626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:35:49.492895   15626 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1126 19:35:49.493963   15626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1126 19:35:49.493976   15626 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1126 19:35:49.505819   15626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1126 19:35:49.505834   15626 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1126 19:35:49.514328   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:49.517692   15626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:35:49.517705   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1126 19:35:49.529807   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:35:49.553557   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:49.553727   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:49.807417   15626 addons.go:495] Verifying addon gcp-auth=true in "addons-368879"
	I1126 19:35:49.810495   15626 out.go:179] * Verifying gcp-auth addon...
	I1126 19:35:49.812147   15626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1126 19:35:49.815943   15626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1126 19:35:49.815966   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:50.014318   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:50.053119   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:50.053926   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:50.315264   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:50.513726   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:50.553349   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:50.553443   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:50.814802   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:51.014286   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:51.044050   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:51.052842   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:51.054024   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:51.315372   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:51.513950   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:51.552949   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:51.553870   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:51.815499   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:52.013581   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:52.053266   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:52.053499   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:52.314785   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:52.514331   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:52.552804   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:52.553628   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:52.815246   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:53.014033   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:53.053712   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:53.054351   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:53.315244   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:53.513606   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:53.543377   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:53.553293   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:53.553409   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:53.814720   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:54.014199   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:54.053020   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:54.053868   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:54.315278   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:54.513582   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:54.553270   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:54.553270   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:54.814393   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:55.014208   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:55.053360   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:55.053600   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:55.315102   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:55.513645   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:55.543514   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:55.553145   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:55.553360   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:55.814530   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:56.013819   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:56.053532   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:56.053687   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:56.314873   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:56.514501   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:56.553011   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:56.554005   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:56.815508   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:57.014055   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:57.052901   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:57.053837   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:57.315231   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:57.513572   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:57.552947   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:57.553148   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:57.814396   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:58.013768   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:58.043645   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:58.053395   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:58.053528   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:58.314948   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:58.514381   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:58.553077   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:58.553156   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:58.814530   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:59.014276   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:59.053312   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:59.053479   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:59.314755   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:59.513947   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:59.553494   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:59.553609   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:59.814721   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:00.014116   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:00.044049   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:00.053062   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:00.053911   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:00.315095   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:00.513425   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:00.553206   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:00.553395   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:00.814474   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:01.014015   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:01.053482   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:01.053644   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:01.314926   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:01.514368   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:01.552928   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:01.553964   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:01.814411   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:02.013794   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:02.053575   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:02.053754   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:02.315133   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:02.513355   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:02.543140   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:02.553094   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:02.553933   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:02.814275   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:03.013909   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:03.053442   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:03.053594   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:03.315189   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:03.513293   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:03.553002   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:03.553768   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:03.815004   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:04.013175   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:04.052782   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:04.053693   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:04.315278   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:04.513586   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:04.543323   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:04.553115   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:04.553165   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:04.814321   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:05.013760   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:05.053493   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:05.053745   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:05.315488   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:05.513815   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:05.553419   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:05.553639   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:05.814790   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:06.014191   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:06.053075   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:06.053944   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:06.315518   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:06.514696   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:06.543535   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:06.553421   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:06.553450   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:06.814958   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:07.014668   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:07.053535   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:07.053684   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:07.315031   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:07.513550   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:07.553331   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:07.553520   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:07.814708   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:08.014258   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:08.053218   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:08.053408   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:08.314745   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:08.514248   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:08.544085   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:08.552904   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:08.553951   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:08.815171   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:09.013500   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:09.053112   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:09.053221   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:09.315215   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:09.513432   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:09.553367   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:09.553564   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:09.814866   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:10.014231   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:10.052904   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:10.053775   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:10.315266   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:10.513553   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:10.553038   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:10.553105   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:10.814398   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:11.013706   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:11.043547   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:11.053313   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:11.053441   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:11.314745   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:11.514168   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:11.552684   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:11.553638   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:11.815253   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:12.013556   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:12.053344   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:12.053400   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:12.314887   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:12.514271   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:12.553032   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:12.553958   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:12.815303   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:13.013726   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:13.043741   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:13.053557   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:13.053763   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:13.315049   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:13.514237   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:13.552624   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:13.553529   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:13.814651   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:14.013880   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:14.053866   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:14.054037   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:14.315368   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:14.513657   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:14.553448   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:14.553674   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:14.814793   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:15.014420   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:15.053224   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:15.053441   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:15.314984   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:15.514212   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:15.544001   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:15.552756   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:15.553774   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:15.814964   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:16.014334   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:16.053333   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:16.054186   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:16.314475   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:16.514573   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:16.553364   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:16.554142   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:16.814637   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:17.014033   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:17.053070   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:17.053885   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:17.315298   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:17.513829   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:17.553376   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:17.553574   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:17.814921   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:18.014180   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:18.044058   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:18.053068   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:18.053873   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:18.315198   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:18.513511   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:18.553565   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:18.553640   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:18.814931   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:19.014207   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:19.052940   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:19.053845   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:19.314397   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:19.513870   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:19.553678   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:19.553872   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:19.814215   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:20.013362   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:20.053137   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:20.053320   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:20.314445   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:20.513892   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:20.543683   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:20.553474   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:20.553680   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:20.815054   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:21.014281   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:21.053214   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:21.053240   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:21.314429   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:21.513747   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:21.553659   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:21.553716   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:21.815284   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:22.013649   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:22.053383   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:22.053637   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:22.314908   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:22.514386   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:22.552868   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:22.553955   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:22.815452   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:23.014039   15626 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1126 19:36:23.014057   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:23.045484   15626 node_ready.go:49] node "addons-368879" is "Ready"
	I1126 19:36:23.045513   15626 node_ready.go:38] duration metric: took 41.004255143s for node "addons-368879" to be "Ready" ...
	I1126 19:36:23.045528   15626 api_server.go:52] waiting for apiserver process to appear ...
	I1126 19:36:23.045582   15626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:36:23.053115   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:23.053955   15626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1126 19:36:23.053973   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:23.060946   15626 api_server.go:72] duration metric: took 41.535409883s to wait for apiserver process to appear ...
	I1126 19:36:23.060966   15626 api_server.go:88] waiting for apiserver healthz status ...
	I1126 19:36:23.060987   15626 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1126 19:36:23.064957   15626 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1126 19:36:23.065791   15626 api_server.go:141] control plane version: v1.34.1
	I1126 19:36:23.065824   15626 api_server.go:131] duration metric: took 4.85025ms to wait for apiserver health ...
	I1126 19:36:23.065835   15626 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 19:36:23.068558   15626 system_pods.go:59] 20 kube-system pods found
	I1126 19:36:23.068584   15626 system_pods.go:61] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.068591   15626 system_pods.go:61] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.068598   15626 system_pods.go:61] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending
	I1126 19:36:23.068603   15626 system_pods.go:61] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.068610   15626 system_pods.go:61] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending
	I1126 19:36:23.068614   15626 system_pods.go:61] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.068617   15626 system_pods.go:61] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.068620   15626 system_pods.go:61] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.068626   15626 system_pods.go:61] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.068632   15626 system_pods.go:61] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.068637   15626 system_pods.go:61] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.068641   15626 system_pods.go:61] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.068648   15626 system_pods.go:61] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.068651   15626 system_pods.go:61] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending
	I1126 19:36:23.068659   15626 system_pods.go:61] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.068663   15626 system_pods.go:61] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.068670   15626 system_pods.go:61] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.068676   15626 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.068684   15626 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.068689   15626 system_pods.go:61] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.068697   15626 system_pods.go:74] duration metric: took 2.857163ms to wait for pod list to return data ...
	I1126 19:36:23.068706   15626 default_sa.go:34] waiting for default service account to be created ...
	I1126 19:36:23.070213   15626 default_sa.go:45] found service account: "default"
	I1126 19:36:23.070228   15626 default_sa.go:55] duration metric: took 1.517275ms for default service account to be created ...
	I1126 19:36:23.070235   15626 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 19:36:23.072883   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:23.072906   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.072912   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.072919   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending
	I1126 19:36:23.072924   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.072928   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending
	I1126 19:36:23.072931   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.072935   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.072942   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.072948   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.072953   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.072957   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.072961   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.072966   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.072971   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending
	I1126 19:36:23.072976   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.072980   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.072984   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.072990   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.072997   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.073008   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.073021   15626 retry.go:31] will retry after 225.984763ms: missing components: kube-dns
	I1126 19:36:23.304100   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:23.304134   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.304144   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.304154   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:36:23.304161   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.304170   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:36:23.304180   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.304188   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.304197   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.304202   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.304213   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.304218   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.304230   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.304238   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.304250   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:23.304260   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.304273   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.304283   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.304297   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.304311   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.304326   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.304348   15626 retry.go:31] will retry after 284.583109ms: missing components: kube-dns
	I1126 19:36:23.402386   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:23.513891   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:23.554235   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:23.554251   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:23.616167   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:23.616193   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.616202   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.616208   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:36:23.616214   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.616219   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:36:23.616223   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.616229   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.616233   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.616237   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.616246   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.616250   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.616254   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.616258   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.616263   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:23.616274   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.616282   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.616293   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.616303   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.616314   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.616323   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.616339   15626 retry.go:31] will retry after 333.768916ms: missing components: kube-dns
	I1126 19:36:23.815834   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:23.954773   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:23.954803   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.954812   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.954818   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:36:23.954823   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.954829   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:36:23.954834   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.954838   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.954842   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.954848   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.954853   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.954860   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.954863   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.954868   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.954873   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:23.954880   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.954885   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.954893   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.954898   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.954906   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.954911   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.954927   15626 retry.go:31] will retry after 606.877014ms: missing components: kube-dns
	I1126 19:36:24.013633   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:24.054647   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:24.054764   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:24.315107   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:24.514383   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:24.553847   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:24.554382   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:24.565987   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:24.566017   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:24.566027   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Running
	I1126 19:36:24.566038   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:36:24.566049   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:24.566057   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:36:24.566066   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:24.566075   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:24.566081   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:24.566091   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:24.566099   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:24.566107   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:24.566113   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:24.566136   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:24.566148   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:24.566156   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:24.566169   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:24.566177   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:24.566187   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:24.566195   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:24.566204   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Running
	I1126 19:36:24.566215   15626 system_pods.go:126] duration metric: took 1.495973725s to wait for k8s-apps to be running ...
	I1126 19:36:24.566229   15626 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 19:36:24.566274   15626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:36:24.582229   15626 system_svc.go:56] duration metric: took 15.994101ms WaitForService to wait for kubelet
	I1126 19:36:24.582253   15626 kubeadm.go:587] duration metric: took 43.056719279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:36:24.582272   15626 node_conditions.go:102] verifying NodePressure condition ...
	I1126 19:36:24.584620   15626 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 19:36:24.584640   15626 node_conditions.go:123] node cpu capacity is 8
	I1126 19:36:24.584653   15626 node_conditions.go:105] duration metric: took 2.37555ms to run NodePressure ...
	I1126 19:36:24.584668   15626 start.go:242] waiting for startup goroutines ...
	I1126 19:36:24.815784   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:25.015221   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:25.053762   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:25.053790   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:25.316075   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:25.514283   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:25.615141   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:25.615191   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:25.814593   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:26.014582   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:26.054000   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:26.054023   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:26.315693   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:26.514778   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:26.554201   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:26.554330   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:26.814993   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:27.014761   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:27.054741   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:27.055083   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:27.315109   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:27.514172   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:27.553997   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:27.554486   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:27.815740   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:28.014966   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:28.110945   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:28.111277   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:28.314846   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:28.514562   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:28.553539   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:28.553642   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:28.814719   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:29.014674   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:29.054384   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:29.054532   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:29.314797   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:29.515217   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:29.554194   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:29.554283   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:29.814739   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:30.014744   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:30.053864   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:30.053901   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:30.315532   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:30.514389   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:30.553774   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:30.554264   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:30.815046   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:31.013643   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:31.054184   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:31.054409   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:31.315228   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:31.515757   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:31.554078   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:31.554125   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:31.816299   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:32.014298   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:32.115308   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:32.115337   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:32.314529   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:32.514237   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:32.553783   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:32.554289   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:32.815789   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:33.014811   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:33.054411   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:33.054558   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:33.315080   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:33.514413   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:33.553604   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:33.553614   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:33.814913   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:34.014860   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:34.054146   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:34.054320   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:34.314733   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:34.514352   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:34.553344   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:34.553360   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:34.814647   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:35.014609   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:35.053827   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:35.053846   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:35.315897   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:35.515346   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:35.553643   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:35.553650   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:35.815534   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:36.014669   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:36.054214   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:36.054416   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:36.315234   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:36.514322   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:36.553807   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:36.554084   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:36.815667   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:37.014229   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:37.053605   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:37.053668   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:37.315589   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:37.514860   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:37.554197   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:37.554209   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:37.815150   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:38.014198   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:38.053994   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:38.054232   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:38.314909   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:38.515147   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:38.553226   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:38.553981   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:38.815494   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:39.014492   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:39.054189   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:39.054434   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:39.316231   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:39.578145   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:39.578197   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:39.578264   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:39.814561   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:40.014142   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:40.053639   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:40.054095   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:40.314690   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:40.514542   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:40.553413   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:40.553509   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:40.815099   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:41.014084   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:41.053720   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:41.054380   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:41.315058   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:41.513582   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:41.553404   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:41.553424   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:41.815407   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:42.014483   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:42.053646   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:42.053887   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:42.315425   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:42.514195   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:42.553746   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:42.614980   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:42.815235   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:43.014159   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:43.053404   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:43.054418   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:43.315893   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:43.514351   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:43.553390   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:43.554052   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:43.814761   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:44.014903   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:44.054085   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:44.054154   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:44.314532   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:44.514632   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:44.614938   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:44.614976   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:44.815010   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:45.015034   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:45.056898   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:45.057071   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:45.314689   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:45.522101   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:45.560030   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:45.560084   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:45.814776   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:46.015831   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:46.056254   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:46.056819   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:46.315378   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:46.514475   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:46.553955   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:46.554049   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:46.815793   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:47.015044   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:47.053891   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:47.054237   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:47.314836   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:47.514940   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:47.615209   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:47.615241   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:47.815268   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:48.014236   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:48.053741   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:48.054175   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:48.315208   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:48.514002   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:48.553143   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:48.553814   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:48.815227   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:49.013708   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:49.053812   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:49.053838   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:49.315592   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:49.514968   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:49.615834   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:49.615943   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:49.815846   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:50.014674   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:50.053804   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:50.053840   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:50.315187   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:50.514339   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:50.553916   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:50.554504   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:50.815340   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:51.014420   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:51.053955   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:51.054183   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:51.314655   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:51.514506   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:51.615067   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:51.615102   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:51.815943   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:52.013909   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:52.054355   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:52.054428   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:52.314521   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:52.514076   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:52.553638   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:52.554429   15626 kapi.go:107] duration metric: took 1m9.503058212s to wait for kubernetes.io/minikube-addons=registry ...
	I1126 19:36:52.814952   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:53.015553   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:53.054367   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:53.315902   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:53.514837   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:53.554244   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:53.814844   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:54.014992   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:54.053606   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:54.314884   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:54.515414   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:54.554120   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:54.815774   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:55.014588   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:55.053794   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:55.317188   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:55.514386   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:55.553997   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:55.815499   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:56.015503   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:56.053965   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:56.315897   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:56.514691   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:56.553935   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:56.815528   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:57.014349   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:57.053300   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:57.316066   15626 kapi.go:107] duration metric: took 1m7.503916703s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1126 19:36:57.318195   15626 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-368879 cluster.
	I1126 19:36:57.319372   15626 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1126 19:36:57.320737   15626 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1126 19:36:57.515414   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:57.556444   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:58.014806   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:58.054064   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:58.515486   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:58.553717   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:59.014223   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:59.054372   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:59.514872   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:59.554099   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:00.077932   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:00.077959   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:00.515354   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:00.553590   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:01.014946   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:01.054451   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:01.514298   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:01.553335   15626 kapi.go:107] duration metric: took 1m18.502553999s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1126 19:37:02.038107   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:02.515106   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:03.014838   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:03.513660   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:04.014840   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:04.514932   15626 kapi.go:107] duration metric: took 1m21.003818866s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1126 19:37:04.516240   15626 out.go:179] * Enabled addons: registry-creds, ingress-dns, cloud-spanner, storage-provisioner-rancher, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1126 19:37:04.517372   15626 addons.go:530] duration metric: took 1m22.991800608s for enable addons: enabled=[registry-creds ingress-dns cloud-spanner storage-provisioner-rancher nvidia-device-plugin amd-gpu-device-plugin storage-provisioner metrics-server yakd inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1126 19:37:04.517412   15626 start.go:247] waiting for cluster config update ...
	I1126 19:37:04.517442   15626 start.go:256] writing updated cluster config ...
	I1126 19:37:04.517745   15626 ssh_runner.go:195] Run: rm -f paused
	I1126 19:37:04.521491   15626 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:37:04.525229   15626 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rv6zq" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.529042   15626 pod_ready.go:94] pod "coredns-66bc5c9577-rv6zq" is "Ready"
	I1126 19:37:04.529062   15626 pod_ready.go:86] duration metric: took 3.813945ms for pod "coredns-66bc5c9577-rv6zq" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.530697   15626 pod_ready.go:83] waiting for pod "etcd-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.533655   15626 pod_ready.go:94] pod "etcd-addons-368879" is "Ready"
	I1126 19:37:04.533675   15626 pod_ready.go:86] duration metric: took 2.961786ms for pod "etcd-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.535214   15626 pod_ready.go:83] waiting for pod "kube-apiserver-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.538174   15626 pod_ready.go:94] pod "kube-apiserver-addons-368879" is "Ready"
	I1126 19:37:04.538190   15626 pod_ready.go:86] duration metric: took 2.960721ms for pod "kube-apiserver-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.539680   15626 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.925214   15626 pod_ready.go:94] pod "kube-controller-manager-addons-368879" is "Ready"
	I1126 19:37:04.925245   15626 pod_ready.go:86] duration metric: took 385.549536ms for pod "kube-controller-manager-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:05.125035   15626 pod_ready.go:83] waiting for pod "kube-proxy-jvtzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:05.527857   15626 pod_ready.go:94] pod "kube-proxy-jvtzp" is "Ready"
	I1126 19:37:05.527883   15626 pod_ready.go:86] duration metric: took 402.827458ms for pod "kube-proxy-jvtzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:05.724325   15626 pod_ready.go:83] waiting for pod "kube-scheduler-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:06.124201   15626 pod_ready.go:94] pod "kube-scheduler-addons-368879" is "Ready"
	I1126 19:37:06.124225   15626 pod_ready.go:86] duration metric: took 399.880093ms for pod "kube-scheduler-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:06.124240   15626 pod_ready.go:40] duration metric: took 1.602722661s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:37:06.167560   15626 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 19:37:06.169559   15626 out.go:179] * Done! kubectl is now configured to use "addons-368879" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.373452187Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-jj7qx/POD" id=432114bb-b3bf-49f0-87de-805992146067 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.373537889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.380393647Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-jj7qx Namespace:default ID:4c7c7c8551a396836802c88256957499e8134973d7aaffb5fbc2eb57ad37c2d5 UID:652d7a02-9080-4982-8c48-03e2a7bb92fd NetNS:/var/run/netns/4141bf1e-2619-4995-9d63-0fe16b05fd86 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f14cd0}] Aliases:map[]}"
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.380580231Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-jj7qx to CNI network \"kindnet\" (type=ptp)"
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.391731327Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-jj7qx Namespace:default ID:4c7c7c8551a396836802c88256957499e8134973d7aaffb5fbc2eb57ad37c2d5 UID:652d7a02-9080-4982-8c48-03e2a7bb92fd NetNS:/var/run/netns/4141bf1e-2619-4995-9d63-0fe16b05fd86 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f14cd0}] Aliases:map[]}"
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.391842315Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-jj7qx for CNI network kindnet (type=ptp)"
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.39266925Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.393440503Z" level=info msg="Ran pod sandbox 4c7c7c8551a396836802c88256957499e8134973d7aaffb5fbc2eb57ad37c2d5 with infra container: default/hello-world-app-5d498dc89-jj7qx/POD" id=432114bb-b3bf-49f0-87de-805992146067 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.394521978Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=51ed1537-beb1-4500-98de-c1bd4007e063 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.394649197Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=51ed1537-beb1-4500-98de-c1bd4007e063 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.394678824Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=51ed1537-beb1-4500-98de-c1bd4007e063 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.395264082Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=c73db70f-27c2-4e68-8a8e-10a5f088cb62 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:39:39 addons-368879 crio[774]: time="2025-11-26T19:39:39.399921251Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.20444833Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=c73db70f-27c2-4e68-8a8e-10a5f088cb62 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.205043225Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c906ad5c-172c-4e1c-839a-2d9db066d3b2 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.206315701Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f436fd03-9454-4005-91a3-a76cd62b8725 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.209390032Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-jj7qx/hello-world-app" id=0e40e8cd-72e3-4206-bc87-25298d66d73e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.209524752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.215491945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.215704994Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8a608c8270a43e38d00b4d7f732915a5544303872e36314f5675e3eaef7dddc3/merged/etc/passwd: no such file or directory"
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.215743203Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8a608c8270a43e38d00b4d7f732915a5544303872e36314f5675e3eaef7dddc3/merged/etc/group: no such file or directory"
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.216030597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.241482883Z" level=info msg="Created container 37dd4cce0e402a8bc5191389edea81ede6f64661bd65dc26ddf04b433e353a93: default/hello-world-app-5d498dc89-jj7qx/hello-world-app" id=0e40e8cd-72e3-4206-bc87-25298d66d73e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.242021865Z" level=info msg="Starting container: 37dd4cce0e402a8bc5191389edea81ede6f64661bd65dc26ddf04b433e353a93" id=e1f7b0d8-bb30-4223-a458-4d295e2b9012 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 19:39:40 addons-368879 crio[774]: time="2025-11-26T19:39:40.243946185Z" level=info msg="Started container" PID=9462 containerID=37dd4cce0e402a8bc5191389edea81ede6f64661bd65dc26ddf04b433e353a93 description=default/hello-world-app-5d498dc89-jj7qx/hello-world-app id=e1f7b0d8-bb30-4223-a458-4d295e2b9012 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c7c7c8551a396836802c88256957499e8134973d7aaffb5fbc2eb57ad37c2d5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	37dd4cce0e402       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   4c7c7c8551a39       hello-world-app-5d498dc89-jj7qx            default
	a63c63672d5b2       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   6ac5ce9947fdb       registry-creds-764b6fb674-rspjs            kube-system
	2cc41bd2b9c50       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   0f59abf6693cf       nginx                                      default
	564d1ab3e2c23       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   25e6f87e1f138       busybox                                    default
	cff11b2555930       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	212fd16c128eb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	9ad67d88c52b2       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	05f619672cbe3       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   09a8cfd5a1366       ingress-nginx-controller-6c8bf45fb-f6sg8   ingress-nginx
	73beca01f1a18       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   fb1126009b9a6       gcp-auth-78565c9fb4-277vt                  gcp-auth
	064c32b317de7       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             2 minutes ago            Exited              patch                                    2                   2bc3d288cb5f8       ingress-nginx-admission-patch-8mvpf        ingress-nginx
	36ba93a3abe19       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	d9b60fb8e7242       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   c4c915d612293       gadget-rmxlg                               gadget
	4197c12bab9b3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	714962454fedb       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   dd1eb951e9ba6       registry-proxy-lcrcc                       kube-system
	603eac3a5db35       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   22e90389a3e73       amd-gpu-device-plugin-gj5pg                kube-system
	d97dad65e0c22       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             2 minutes ago            Running             csi-attacher                             0                   b02c8daa86aab       csi-hostpath-attacher-0                    kube-system
	e825b2f37651c       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   49d93fe45ce5c       nvidia-device-plugin-daemonset-jr6zz       kube-system
	7dfc385a20d46       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   d2d21922bcfe7       snapshot-controller-7d9fbc56b8-4kvzv       kube-system
	47283ac77595b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   55b337aac826c       snapshot-controller-7d9fbc56b8-2lfj6       kube-system
	5a22bb7b95033       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	4001803d68503       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   2 minutes ago            Exited              create                                   0                   03ccd7fc8f711       ingress-nginx-admission-create-tbk6s       ingress-nginx
	efa536fb3d778       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago            Running             csi-resizer                              0                   7a5905c54d8fe       csi-hostpath-resizer-0                     kube-system
	65470a1503151       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           2 minutes ago            Running             registry                                 0                   e298b73fdbb9f       registry-6b586f9694-4kzdl                  kube-system
	eb50e6ab9debf       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   5ed74000c985f       kube-ingress-dns-minikube                  kube-system
	1eaf62f13549a       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   373ebc59fb5fc       yakd-dashboard-5ff678cb9-zkdvc             yakd-dashboard
	2cf89df41d649       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   e54e661b8694f       local-path-provisioner-648f6765c9-4lngh    local-path-storage
	719640c6c4cf6       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   d8b84982c82d2       metrics-server-85b7d694d7-mnzc2            kube-system
	93006bab5753d       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   ed79da8d73539       cloud-spanner-emulator-5bdddb765-wqlhm     default
	d64fe5dcd9941       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   6cdc9a81ff136       coredns-66bc5c9577-rv6zq                   kube-system
	25e48df5dfb4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   1a532dabf38f0       storage-provisioner                        kube-system
	59ceeea3b62d8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             3 minutes ago            Running             kube-proxy                               0                   6b827d89c503d       kube-proxy-jvtzp                           kube-system
	c71770537fdbd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago            Running             kindnet-cni                              0                   bbfa93b5771e3       kindnet-dqhsm                              kube-system
	6d9b40c465aff       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   8023c2e989f33       kube-apiserver-addons-368879               kube-system
	00f8c7ca3495a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   c6c6b7410e26b       kube-scheduler-addons-368879               kube-system
	f7ace30aee7af       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   b9422d1b1b1d5       etcd-addons-368879                         kube-system
	beecb43fac96b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   f2bffafd455a4       kube-controller-manager-addons-368879      kube-system
	
	
	==> coredns [d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859] <==
	[INFO] 10.244.0.22:46386 - 22532 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000172146s
	[INFO] 10.244.0.22:60922 - 59753 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004420742s
	[INFO] 10.244.0.22:40374 - 14087 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006018846s
	[INFO] 10.244.0.22:46867 - 11971 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005200066s
	[INFO] 10.244.0.22:49741 - 61297 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005543734s
	[INFO] 10.244.0.22:51466 - 51251 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005132371s
	[INFO] 10.244.0.22:51594 - 34280 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005729549s
	[INFO] 10.244.0.22:59217 - 3605 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001632058s
	[INFO] 10.244.0.22:42526 - 38586 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002172618s
	[INFO] 10.244.0.25:38341 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000207825s
	[INFO] 10.244.0.25:57510 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00013936s
	[INFO] 10.244.0.31:48226 - 60817 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000220397s
	[INFO] 10.244.0.31:56787 - 22373 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000293484s
	[INFO] 10.244.0.31:33049 - 43699 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000122572s
	[INFO] 10.244.0.31:49026 - 29001 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000156924s
	[INFO] 10.244.0.31:35676 - 59639 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000094697s
	[INFO] 10.244.0.31:34137 - 52964 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000118858s
	[INFO] 10.244.0.31:48085 - 59623 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005451174s
	[INFO] 10.244.0.31:40351 - 29359 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006287995s
	[INFO] 10.244.0.31:45730 - 18172 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004304638s
	[INFO] 10.244.0.31:35070 - 13512 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004790481s
	[INFO] 10.244.0.31:50256 - 23712 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004273349s
	[INFO] 10.244.0.31:55888 - 18997 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004657163s
	[INFO] 10.244.0.31:38325 - 9233 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001501031s
	[INFO] 10.244.0.31:54823 - 22361 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001737357s
	
	
	==> describe nodes <==
	Name:               addons-368879
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-368879
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=addons-368879
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_35_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-368879
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-368879"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:35:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-368879
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 19:39:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 19:38:39 +0000   Wed, 26 Nov 2025 19:35:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 19:38:39 +0000   Wed, 26 Nov 2025 19:35:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 19:38:39 +0000   Wed, 26 Nov 2025 19:35:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 19:38:39 +0000   Wed, 26 Nov 2025 19:36:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-368879
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                3b9ff54c-dae7-424b-a157-0391af8d1944
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  default                     cloud-spanner-emulator-5bdddb765-wqlhm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  default                     hello-world-app-5d498dc89-jj7qx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-rmxlg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  gcp-auth                    gcp-auth-78565c9fb4-277vt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-f6sg8    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m57s
	  kube-system                 amd-gpu-device-plugin-gj5pg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 coredns-66bc5c9577-rv6zq                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m59s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 csi-hostpathplugin-4cdfn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 etcd-addons-368879                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m5s
	  kube-system                 kindnet-dqhsm                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m59s
	  kube-system                 kube-apiserver-addons-368879                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-addons-368879       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-jvtzp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-scheduler-addons-368879                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 metrics-server-85b7d694d7-mnzc2             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m58s
	  kube-system                 nvidia-device-plugin-daemonset-jr6zz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 registry-6b586f9694-4kzdl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 registry-creds-764b6fb674-rspjs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 registry-proxy-lcrcc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 snapshot-controller-7d9fbc56b8-2lfj6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 snapshot-controller-7d9fbc56b8-4kvzv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  local-path-storage          local-path-provisioner-648f6765c9-4lngh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-zkdvc              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m57s  kube-proxy       
	  Normal  Starting                 4m5s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s   kubelet          Node addons-368879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s   kubelet          Node addons-368879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s   kubelet          Node addons-368879 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m     node-controller  Node addons-368879 event: Registered Node addons-368879 in Controller
	  Normal  NodeReady                3m18s  kubelet          Node addons-368879 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160] <==
	{"level":"warn","ts":"2025-11-26T19:35:33.048946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.055129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.064524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.073541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.079777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.085423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.091890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.097780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.105442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.111988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.117821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.124290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.130013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.138602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.144424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.163885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.169585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.175323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.227391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:44.037543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:44.043691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:36:10.608952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:36:10.629871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:36:10.635762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54812","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T19:37:34.847777Z","caller":"traceutil/trace.go:172","msg":"trace[1777796286] transaction","detail":"{read_only:false; response_revision:1373; number_of_response:1; }","duration":"122.98725ms","start":"2025-11-26T19:37:34.724763Z","end":"2025-11-26T19:37:34.847750Z","steps":["trace[1777796286] 'process raft request'  (duration: 47.010472ms)","trace[1777796286] 'compare'  (duration: 75.895775ms)"],"step_count":2}
	
	
	==> gcp-auth [73beca01f1a189371bd5d5cacdcac2a900e8b76f1a10e8424c5b84daf53c68f4] <==
	2025/11/26 19:36:56 GCP Auth Webhook started!
	2025/11/26 19:37:06 Ready to marshal response ...
	2025/11/26 19:37:06 Ready to write response ...
	2025/11/26 19:37:06 Ready to marshal response ...
	2025/11/26 19:37:06 Ready to write response ...
	2025/11/26 19:37:06 Ready to marshal response ...
	2025/11/26 19:37:06 Ready to write response ...
	2025/11/26 19:37:15 Ready to marshal response ...
	2025/11/26 19:37:15 Ready to write response ...
	2025/11/26 19:37:25 Ready to marshal response ...
	2025/11/26 19:37:25 Ready to write response ...
	2025/11/26 19:37:26 Ready to marshal response ...
	2025/11/26 19:37:26 Ready to write response ...
	2025/11/26 19:37:26 Ready to marshal response ...
	2025/11/26 19:37:26 Ready to write response ...
	2025/11/26 19:37:35 Ready to marshal response ...
	2025/11/26 19:37:35 Ready to write response ...
	2025/11/26 19:37:35 Ready to marshal response ...
	2025/11/26 19:37:35 Ready to write response ...
	2025/11/26 19:38:06 Ready to marshal response ...
	2025/11/26 19:38:06 Ready to write response ...
	2025/11/26 19:39:39 Ready to marshal response ...
	2025/11/26 19:39:39 Ready to write response ...
	
	
	==> kernel <==
	 19:39:40 up 22 min,  0 user,  load average: 0.36, 0.65, 0.32
	Linux addons-368879 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff] <==
	I1126 19:37:32.686392       1 main.go:301] handling current node
	I1126 19:37:42.687647       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:37:42.687680       1 main.go:301] handling current node
	I1126 19:37:52.689224       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:37:52.689266       1 main.go:301] handling current node
	I1126 19:38:02.688753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:02.688781       1 main.go:301] handling current node
	I1126 19:38:12.688733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:12.688776       1 main.go:301] handling current node
	I1126 19:38:22.686725       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:22.686756       1 main.go:301] handling current node
	I1126 19:38:32.693966       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:32.693995       1 main.go:301] handling current node
	I1126 19:38:42.687376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:42.687412       1 main.go:301] handling current node
	I1126 19:38:52.686523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:52.686550       1 main.go:301] handling current node
	I1126 19:39:02.693106       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:02.693141       1 main.go:301] handling current node
	I1126 19:39:12.687701       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:12.687728       1 main.go:301] handling current node
	I1126 19:39:22.686571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:22.686615       1 main.go:301] handling current node
	I1126 19:39:32.694598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:32.694625       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310] <==
	W1126 19:36:10.629847       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:36:10.635759       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:36:22.918825       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.218.116:443: connect: connection refused
	E1126 19:36:22.918868       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.218.116:443: connect: connection refused" logger="UnhandledError"
	W1126 19:36:22.918951       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.218.116:443: connect: connection refused
	E1126 19:36:22.918983       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.218.116:443: connect: connection refused" logger="UnhandledError"
	W1126 19:36:22.937144       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.218.116:443: connect: connection refused
	E1126 19:36:22.937183       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.218.116:443: connect: connection refused" logger="UnhandledError"
	W1126 19:36:22.947348       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.218.116:443: connect: connection refused
	E1126 19:36:22.947391       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.218.116:443: connect: connection refused" logger="UnhandledError"
	W1126 19:36:28.130713       1 handler_proxy.go:99] no RequestInfo found in the context
	E1126 19:36:28.130785       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1126 19:36:28.130812       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.61.54:443: connect: connection refused" logger="UnhandledError"
	E1126 19:36:28.132889       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.61.54:443: connect: connection refused" logger="UnhandledError"
	E1126 19:36:28.138748       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.61.54:443: connect: connection refused" logger="UnhandledError"
	I1126 19:36:28.182542       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1126 19:37:14.836570       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39742: use of closed network connection
	E1126 19:37:14.980403       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39766: use of closed network connection
	I1126 19:37:15.468362       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1126 19:37:15.648628       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.138.27"}
	I1126 19:37:45.733934       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1126 19:39:39.132098       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.150.246"}
	
	
	==> kube-controller-manager [beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65] <==
	I1126 19:35:40.591943       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 19:35:40.592163       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 19:35:40.592180       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 19:35:40.592265       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1126 19:35:40.592935       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 19:35:40.592959       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 19:35:40.592978       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 19:35:40.592994       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 19:35:40.593048       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 19:35:40.593471       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 19:35:40.596056       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 19:35:40.597255       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 19:35:40.598410       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:35:40.600633       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 19:35:40.604864       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 19:35:40.615151       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1126 19:35:42.740667       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1126 19:36:10.602193       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1126 19:36:10.602320       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1126 19:36:10.602352       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1126 19:36:10.621994       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1126 19:36:10.625048       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1126 19:36:10.702441       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:36:10.725622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:36:25.547699       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c] <==
	I1126 19:35:42.481846       1 server_linux.go:53] "Using iptables proxy"
	I1126 19:35:42.722541       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 19:35:42.829102       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 19:35:42.829136       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1126 19:35:42.829227       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 19:35:42.855108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 19:35:42.855229       1 server_linux.go:132] "Using iptables Proxier"
	I1126 19:35:42.862509       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 19:35:42.869749       1 server.go:527] "Version info" version="v1.34.1"
	I1126 19:35:42.869810       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:35:42.873117       1 config.go:106] "Starting endpoint slice config controller"
	I1126 19:35:42.873147       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 19:35:42.873186       1 config.go:200] "Starting service config controller"
	I1126 19:35:42.873192       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 19:35:42.873347       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 19:35:42.873364       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 19:35:42.873563       1 config.go:309] "Starting node config controller"
	I1126 19:35:42.873584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 19:35:42.873592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 19:35:42.974169       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 19:35:42.974206       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 19:35:42.974180       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7] <==
	E1126 19:35:33.612940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:35:33.612962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 19:35:33.613028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:35:33.613055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:35:33.613217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 19:35:33.616223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:35:33.616253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:35:33.616225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:35:33.616356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 19:35:33.616369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:35:33.616377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 19:35:33.616435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 19:35:33.616514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:35:33.616516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:35:34.491405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:35:34.519637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 19:35:34.552530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:35:34.620064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:35:34.659688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:35:34.701773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 19:35:34.712886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:35:34.753889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:35:34.769786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:35:34.785615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1126 19:35:35.010803       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 19:38:09 addons-368879 kubelet[1272]: I1126 19:38:09.890248    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-lcrcc" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:38:11 addons-368879 kubelet[1272]: I1126 19:38:11.890632    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jr6zz" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.095776    1272 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/28e1cec0-f222-4dc1-9c70-d6a1889a2ca3-gcp-creds\") pod \"28e1cec0-f222-4dc1-9c70-d6a1889a2ca3\" (UID: \"28e1cec0-f222-4dc1-9c70-d6a1889a2ca3\") "
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.095873    1272 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6e17d391-caff-11f0-93a7-2e04ac8e0502\") pod \"28e1cec0-f222-4dc1-9c70-d6a1889a2ca3\" (UID: \"28e1cec0-f222-4dc1-9c70-d6a1889a2ca3\") "
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.095906    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e1cec0-f222-4dc1-9c70-d6a1889a2ca3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "28e1cec0-f222-4dc1-9c70-d6a1889a2ca3" (UID: "28e1cec0-f222-4dc1-9c70-d6a1889a2ca3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.095922    1272 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx5zp\" (UniqueName: \"kubernetes.io/projected/28e1cec0-f222-4dc1-9c70-d6a1889a2ca3-kube-api-access-hx5zp\") pod \"28e1cec0-f222-4dc1-9c70-d6a1889a2ca3\" (UID: \"28e1cec0-f222-4dc1-9c70-d6a1889a2ca3\") "
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.096144    1272 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/28e1cec0-f222-4dc1-9c70-d6a1889a2ca3-gcp-creds\") on node \"addons-368879\" DevicePath \"\""
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.098360    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e1cec0-f222-4dc1-9c70-d6a1889a2ca3-kube-api-access-hx5zp" (OuterVolumeSpecName: "kube-api-access-hx5zp") pod "28e1cec0-f222-4dc1-9c70-d6a1889a2ca3" (UID: "28e1cec0-f222-4dc1-9c70-d6a1889a2ca3"). InnerVolumeSpecName "kube-api-access-hx5zp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.098581    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^6e17d391-caff-11f0-93a7-2e04ac8e0502" (OuterVolumeSpecName: "task-pv-storage") pod "28e1cec0-f222-4dc1-9c70-d6a1889a2ca3" (UID: "28e1cec0-f222-4dc1-9c70-d6a1889a2ca3"). InnerVolumeSpecName "pvc-358c5761-46eb-43c8-adf6-4c4ddfc81f00". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.196962    1272 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-358c5761-46eb-43c8-adf6-4c4ddfc81f00\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6e17d391-caff-11f0-93a7-2e04ac8e0502\") on node \"addons-368879\" "
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.196993    1272 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hx5zp\" (UniqueName: \"kubernetes.io/projected/28e1cec0-f222-4dc1-9c70-d6a1889a2ca3-kube-api-access-hx5zp\") on node \"addons-368879\" DevicePath \"\""
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.200847    1272 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-358c5761-46eb-43c8-adf6-4c4ddfc81f00" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^6e17d391-caff-11f0-93a7-2e04ac8e0502") on node "addons-368879"
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.297357    1272 reconciler_common.go:299] "Volume detached for volume \"pvc-358c5761-46eb-43c8-adf6-4c4ddfc81f00\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6e17d391-caff-11f0-93a7-2e04ac8e0502\") on node \"addons-368879\" DevicePath \"\""
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.510596    1272 scope.go:117] "RemoveContainer" containerID="162fdfa495ec5685d93f6d2c8c8ace550fcd72c2f755801795cb265da5155469"
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.519657    1272 scope.go:117] "RemoveContainer" containerID="162fdfa495ec5685d93f6d2c8c8ace550fcd72c2f755801795cb265da5155469"
	Nov 26 19:38:14 addons-368879 kubelet[1272]: E1126 19:38:14.520350    1272 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162fdfa495ec5685d93f6d2c8c8ace550fcd72c2f755801795cb265da5155469\": container with ID starting with 162fdfa495ec5685d93f6d2c8c8ace550fcd72c2f755801795cb265da5155469 not found: ID does not exist" containerID="162fdfa495ec5685d93f6d2c8c8ace550fcd72c2f755801795cb265da5155469"
	Nov 26 19:38:14 addons-368879 kubelet[1272]: I1126 19:38:14.520390    1272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162fdfa495ec5685d93f6d2c8c8ace550fcd72c2f755801795cb265da5155469"} err="failed to get container status \"162fdfa495ec5685d93f6d2c8c8ace550fcd72c2f755801795cb265da5155469\": rpc error: code = NotFound desc = could not find container \"162fdfa495ec5685d93f6d2c8c8ace550fcd72c2f755801795cb265da5155469\": container with ID starting with 162fdfa495ec5685d93f6d2c8c8ace550fcd72c2f755801795cb265da5155469 not found: ID does not exist"
	Nov 26 19:38:15 addons-368879 kubelet[1272]: I1126 19:38:15.892255    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28e1cec0-f222-4dc1-9c70-d6a1889a2ca3" path="/var/lib/kubelet/pods/28e1cec0-f222-4dc1-9c70-d6a1889a2ca3/volumes"
	Nov 26 19:38:25 addons-368879 kubelet[1272]: E1126 19:38:25.921728    1272 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-rspjs" podUID="3fc9a944-7747-4626-b4ca-0f2e048703fd"
	Nov 26 19:38:38 addons-368879 kubelet[1272]: I1126 19:38:38.608023    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-rspjs" podStartSLOduration=175.383556542 podStartE2EDuration="2m56.608004171s" podCreationTimestamp="2025-11-26 19:35:42 +0000 UTC" firstStartedPulling="2025-11-26 19:38:36.912725752 +0000 UTC m=+181.098065388" lastFinishedPulling="2025-11-26 19:38:38.137173392 +0000 UTC m=+182.322513017" observedRunningTime="2025-11-26 19:38:38.606401215 +0000 UTC m=+182.791740853" watchObservedRunningTime="2025-11-26 19:38:38.608004171 +0000 UTC m=+182.793343826"
	Nov 26 19:39:18 addons-368879 kubelet[1272]: I1126 19:39:18.890557    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-lcrcc" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:39:21 addons-368879 kubelet[1272]: I1126 19:39:21.890511    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-gj5pg" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:39:28 addons-368879 kubelet[1272]: I1126 19:39:28.890540    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jr6zz" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:39:39 addons-368879 kubelet[1272]: I1126 19:39:39.173581    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vz2\" (UniqueName: \"kubernetes.io/projected/652d7a02-9080-4982-8c48-03e2a7bb92fd-kube-api-access-b7vz2\") pod \"hello-world-app-5d498dc89-jj7qx\" (UID: \"652d7a02-9080-4982-8c48-03e2a7bb92fd\") " pod="default/hello-world-app-5d498dc89-jj7qx"
	Nov 26 19:39:39 addons-368879 kubelet[1272]: I1126 19:39:39.173637    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/652d7a02-9080-4982-8c48-03e2a7bb92fd-gcp-creds\") pod \"hello-world-app-5d498dc89-jj7qx\" (UID: \"652d7a02-9080-4982-8c48-03e2a7bb92fd\") " pod="default/hello-world-app-5d498dc89-jj7qx"
	
	
	==> storage-provisioner [25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038] <==
	W1126 19:39:16.040576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:18.042877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:18.047037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:20.050077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:20.054186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:22.056743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:22.060694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:24.063674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:24.067070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:26.070090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:26.074495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:28.076991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:28.080256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:30.082549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:30.085864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:32.088104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:32.091320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:34.094148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:34.098576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:36.101170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:36.104506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:38.107680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:38.112191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:40.115034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:40.118791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-368879 -n addons-368879
helpers_test.go:269: (dbg) Run:  kubectl --context addons-368879 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-tbk6s ingress-nginx-admission-patch-8mvpf
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-368879 describe pod ingress-nginx-admission-create-tbk6s ingress-nginx-admission-patch-8mvpf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-368879 describe pod ingress-nginx-admission-create-tbk6s ingress-nginx-admission-patch-8mvpf: exit status 1 (54.104363ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tbk6s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8mvpf" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-368879 describe pod ingress-nginx-admission-create-tbk6s ingress-nginx-admission-patch-8mvpf: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (233.857074ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:39:41.426203   29753 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:39:41.426493   29753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:41.426504   29753 out.go:374] Setting ErrFile to fd 2...
	I1126 19:39:41.426510   29753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:41.426695   29753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:39:41.426928   29753 mustload.go:66] Loading cluster: addons-368879
	I1126 19:39:41.428194   29753 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:41.428223   29753 addons.go:622] checking whether the cluster is paused
	I1126 19:39:41.428344   29753 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:41.428370   29753 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:39:41.428732   29753 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:39:41.445913   29753 ssh_runner.go:195] Run: systemctl --version
	I1126 19:39:41.445961   29753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:39:41.462677   29753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:39:41.558433   29753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:39:41.558536   29753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:39:41.585837   29753 cri.go:89] found id: "a63c63672d5b267b4f7a559029a7b378798ceab459016dfe8c5b5d17c2f1bf4a"
	I1126 19:39:41.585854   29753 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:39:41.585858   29753 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:39:41.585861   29753 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:39:41.585864   29753 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:39:41.585867   29753 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:39:41.585870   29753 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:39:41.585872   29753 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:39:41.585876   29753 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:39:41.585884   29753 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:39:41.585887   29753 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:39:41.585890   29753 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:39:41.585893   29753 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:39:41.585896   29753 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:39:41.585899   29753 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:39:41.585909   29753 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:39:41.585915   29753 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:39:41.585919   29753 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:39:41.585922   29753 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:39:41.585925   29753 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:39:41.585927   29753 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:39:41.585930   29753 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:39:41.585934   29753 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:39:41.585941   29753 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:39:41.585944   29753 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:39:41.585947   29753 cri.go:89] found id: ""
	I1126 19:39:41.585979   29753 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:39:41.599332   29753 out.go:203] 
	W1126 19:39:41.600635   29753 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:39:41.600654   29753 out.go:285] * 
	* 
	W1126 19:39:41.603804   29753 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:39:41.605063   29753 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable ingress --alsologtostderr -v=1: exit status 11 (232.970023ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:39:41.660215   29815 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:39:41.660495   29815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:41.660505   29815 out.go:374] Setting ErrFile to fd 2...
	I1126 19:39:41.660510   29815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:41.660674   29815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:39:41.660908   29815 mustload.go:66] Loading cluster: addons-368879
	I1126 19:39:41.661188   29815 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:41.661207   29815 addons.go:622] checking whether the cluster is paused
	I1126 19:39:41.661291   29815 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:41.661305   29815 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:39:41.661641   29815 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:39:41.678830   29815 ssh_runner.go:195] Run: systemctl --version
	I1126 19:39:41.678896   29815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:39:41.696741   29815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:39:41.792317   29815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:39:41.792411   29815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:39:41.819826   29815 cri.go:89] found id: "a63c63672d5b267b4f7a559029a7b378798ceab459016dfe8c5b5d17c2f1bf4a"
	I1126 19:39:41.819852   29815 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:39:41.819856   29815 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:39:41.819861   29815 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:39:41.819864   29815 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:39:41.819868   29815 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:39:41.819871   29815 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:39:41.819874   29815 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:39:41.819877   29815 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:39:41.819885   29815 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:39:41.819888   29815 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:39:41.819900   29815 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:39:41.819903   29815 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:39:41.819906   29815 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:39:41.819909   29815 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:39:41.819916   29815 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:39:41.819922   29815 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:39:41.819926   29815 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:39:41.819929   29815 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:39:41.819931   29815 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:39:41.819934   29815 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:39:41.819937   29815 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:39:41.819939   29815 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:39:41.819942   29815 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:39:41.819945   29815 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:39:41.819947   29815 cri.go:89] found id: ""
	I1126 19:39:41.819995   29815 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:39:41.833143   29815 out.go:203] 
	W1126 19:39:41.834278   29815 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:39:41.834301   29815 out.go:285] * 
	* 
	W1126 19:39:41.837257   29815 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:39:41.838402   29815 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.62s)

TestAddons/parallel/InspektorGadget (5.46s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-rmxlg" [10883cf1-4d5e-4afb-9e4c-838894cc0d78] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00349228s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (450.963133ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:34.380294   26310 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:34.380630   26310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:34.380645   26310 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:34.380651   26310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:34.380944   26310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:34.381225   26310 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:34.381674   26310 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:34.381702   26310 addons.go:622] checking whether the cluster is paused
	I1126 19:37:34.381832   26310 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:34.381857   26310 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:34.382392   26310 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:34.402321   26310 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:34.402370   26310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:34.420250   26310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:34.521175   26310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:34.521270   26310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:34.556393   26310 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:34.556417   26310 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:34.556424   26310 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:34.556430   26310 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:34.556434   26310 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:34.556447   26310 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:34.556452   26310 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:34.556470   26310 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:34.556475   26310 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:34.556484   26310 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:34.556493   26310 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:34.556497   26310 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:34.556502   26310 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:34.556506   26310 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:34.556511   26310 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:34.556521   26310 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:34.556526   26310 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:34.556530   26310 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:34.556532   26310 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:34.556535   26310 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:34.556538   26310 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:34.556541   26310 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:34.556545   26310 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:34.556550   26310 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:34.556554   26310 cri.go:89] found id: ""
	I1126 19:37:34.556603   26310 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:34.590735   26310 out.go:203] 
	W1126 19:37:34.607702   26310 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:34.607722   26310 out.go:285] * 
	* 
	W1126 19:37:34.612752   26310 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:34.648513   26310 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.46s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.003825ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002906155s
addons_test.go:463: (dbg) Run:  kubectl --context addons-368879 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (240.448496ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:20.337703   25267 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:20.337988   25267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:20.338000   25267 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:20.338004   25267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:20.338225   25267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:20.338541   25267 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:20.338863   25267 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:20.338883   25267 addons.go:622] checking whether the cluster is paused
	I1126 19:37:20.338984   25267 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:20.339007   25267 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:20.339421   25267 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:20.358003   25267 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:20.358043   25267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:20.374999   25267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:20.472380   25267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:20.472478   25267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:20.500790   25267 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:20.500818   25267 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:20.500822   25267 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:20.500825   25267 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:20.500828   25267 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:20.500836   25267 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:20.500839   25267 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:20.500842   25267 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:20.500845   25267 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:20.500854   25267 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:20.500861   25267 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:20.500864   25267 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:20.500866   25267 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:20.500869   25267 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:20.500875   25267 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:20.500887   25267 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:20.500894   25267 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:20.500899   25267 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:20.500907   25267 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:20.500910   25267 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:20.500912   25267 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:20.500915   25267 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:20.500918   25267 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:20.500921   25267 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:20.500923   25267 cri.go:89] found id: ""
	I1126 19:37:20.500977   25267 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:20.514322   25267 out.go:203] 
	W1126 19:37:20.515454   25267 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:20.515534   25267 out.go:285] * 
	W1126 19:37:20.518397   25267 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:20.519579   25267 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)
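The failure above never reaches the addon itself: `sudo runc list -f json` exits 1 because `/run/runc` does not exist on the node, so minikube's check-paused step aborts with `MK_ADDON_DISABLE_PAUSED`. A small diagnostic sketch for finding which OCI runtime state directory the node actually has (the candidate paths in the comment are typical defaults for runc and crun, not taken from this run):

```shell
# find_runtime_root: print the first existing OCI runtime state directory
# from a list of candidates. A diagnostic sketch, not minikube code.
find_runtime_root() {
  for dir in "$@"; do
    if [ -d "$dir" ]; then
      echo "$dir"
      return 0
    fi
  done
  echo "no runtime state dir found" >&2
  return 1
}

# On the node (e.g. via `minikube ssh -p addons-368879`) one would try:
#   find_runtime_root /run/runc /run/crun
```

If only `/run/crun` exists, crio is configured with crun rather than runc, which would explain why `runc list` has no state directory to open.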

                                                
                                    
TestAddons/parallel/CSI (40.95s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1126 19:37:34.372982   14258 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1126 19:37:34.376362   14258 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1126 19:37:34.376388   14258 kapi.go:107] duration metric: took 3.418124ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.429342ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-368879 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-368879 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c635c6fe-8d87-4aa4-8f89-1120a868d055] Pending
helpers_test.go:352: "task-pv-pod" [c635c6fe-8d87-4aa4-8f89-1120a868d055] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c635c6fe-8d87-4aa4-8f89-1120a868d055] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003348795s
addons_test.go:572: (dbg) Run:  kubectl --context addons-368879 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-368879 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-368879 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-368879 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-368879 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-368879 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-368879 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [28e1cec0-f222-4dc1-9c70-d6a1889a2ca3] Pending
helpers_test.go:352: "task-pv-pod-restore" [28e1cec0-f222-4dc1-9c70-d6a1889a2ca3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [28e1cec0-f222-4dc1-9c70-d6a1889a2ca3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003429596s
addons_test.go:614: (dbg) Run:  kubectl --context addons-368879 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-368879 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-368879 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (238.027556ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:38:14.900570   27701 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:38:14.900718   27701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:38:14.900730   27701 out.go:374] Setting ErrFile to fd 2...
	I1126 19:38:14.900734   27701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:38:14.900929   27701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:38:14.901160   27701 mustload.go:66] Loading cluster: addons-368879
	I1126 19:38:14.901482   27701 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:38:14.901500   27701 addons.go:622] checking whether the cluster is paused
	I1126 19:38:14.901579   27701 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:38:14.901593   27701 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:38:14.901971   27701 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:38:14.919814   27701 ssh_runner.go:195] Run: systemctl --version
	I1126 19:38:14.919865   27701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:38:14.937275   27701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:38:15.034519   27701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:38:15.034611   27701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:38:15.063635   27701 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:38:15.063661   27701 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:38:15.063667   27701 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:38:15.063673   27701 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:38:15.063676   27701 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:38:15.063680   27701 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:38:15.063683   27701 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:38:15.063686   27701 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:38:15.063689   27701 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:38:15.063694   27701 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:38:15.063698   27701 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:38:15.063702   27701 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:38:15.063707   27701 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:38:15.063710   27701 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:38:15.063713   27701 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:38:15.063718   27701 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:38:15.063723   27701 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:38:15.063727   27701 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:38:15.063730   27701 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:38:15.063732   27701 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:38:15.063738   27701 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:38:15.063741   27701 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:38:15.063744   27701 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:38:15.063747   27701 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:38:15.063749   27701 cri.go:89] found id: ""
	I1126 19:38:15.063787   27701 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:38:15.076594   27701 out.go:203] 
	W1126 19:38:15.077986   27701 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:38:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:38:15.078011   27701 out.go:285] * 
	W1126 19:38:15.081747   27701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:38:15.083047   27701 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (233.945764ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:38:15.140159   27765 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:38:15.140477   27765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:38:15.140488   27765 out.go:374] Setting ErrFile to fd 2...
	I1126 19:38:15.140494   27765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:38:15.140667   27765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:38:15.140922   27765 mustload.go:66] Loading cluster: addons-368879
	I1126 19:38:15.141215   27765 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:38:15.141239   27765 addons.go:622] checking whether the cluster is paused
	I1126 19:38:15.141337   27765 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:38:15.141356   27765 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:38:15.141720   27765 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:38:15.159615   27765 ssh_runner.go:195] Run: systemctl --version
	I1126 19:38:15.159659   27765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:38:15.176800   27765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:38:15.272203   27765 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:38:15.272279   27765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:38:15.299340   27765 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:38:15.299364   27765 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:38:15.299368   27765 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:38:15.299381   27765 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:38:15.299384   27765 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:38:15.299389   27765 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:38:15.299391   27765 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:38:15.299394   27765 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:38:15.299398   27765 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:38:15.299409   27765 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:38:15.299415   27765 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:38:15.299418   27765 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:38:15.299421   27765 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:38:15.299424   27765 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:38:15.299427   27765 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:38:15.299434   27765 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:38:15.299439   27765 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:38:15.299444   27765 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:38:15.299447   27765 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:38:15.299449   27765 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:38:15.299452   27765 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:38:15.299471   27765 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:38:15.299476   27765 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:38:15.299480   27765 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:38:15.299484   27765 cri.go:89] found id: ""
	I1126 19:38:15.299524   27765 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:38:15.312576   27765 out.go:203] 
	W1126 19:38:15.313753   27765 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:38:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:38:15.313778   27765 out.go:285] * 
	W1126 19:38:15.316692   27765 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:38:15.317797   27765 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.95s)
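All three disable attempts in this group (metrics-server, volumesnapshots, csi-hostpath-driver) fail with the identical `MK_ADDON_DISABLE_PAUSED` error, so when triaging a batch of these logs it helps to classify by the underlying stderr line. A convenience sketch (the matched string is verbatim from the log above; the helper itself is hypothetical, not part of minikube):

```shell
# is_runc_root_missing: succeed if a captured log contains the runc
# "state directory missing" error seen in the failures above.
is_runc_root_missing() {
  grep -q 'open /run/runc: no such file or directory' "$1"
}

# Usage: capture stderr of the failing command, then:
#   is_runc_root_missing stderr.txt && echo "runc state dir missing on node"
```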

                                                
                                    
TestAddons/parallel/Headlamp (2.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-368879 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-368879 --alsologtostderr -v=1: exit status 11 (247.776694ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:37:15.280298   23949 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:15.280444   23949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:15.280470   23949 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:15.280478   23949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:15.280776   23949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:15.281122   23949 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:15.281573   23949 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:15.281597   23949 addons.go:622] checking whether the cluster is paused
	I1126 19:37:15.281685   23949 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:15.281699   23949 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:15.282092   23949 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:15.300279   23949 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:15.300334   23949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:15.316866   23949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:15.413646   23949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:15.413729   23949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:15.444193   23949 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:15.444214   23949 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:15.444218   23949 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:15.444222   23949 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:15.444225   23949 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:15.444232   23949 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:15.444235   23949 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:15.444238   23949 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:15.444241   23949 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:15.444252   23949 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:15.444260   23949 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:15.444263   23949 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:15.444266   23949 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:15.444268   23949 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:15.444271   23949 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:15.444282   23949 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:15.444289   23949 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:15.444293   23949 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:15.444296   23949 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:15.444299   23949 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:15.444305   23949 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:15.444310   23949 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:15.444314   23949 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:15.444321   23949 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:15.444326   23949 cri.go:89] found id: ""
	I1126 19:37:15.444389   23949 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:15.458478   23949 out.go:203] 
	W1126 19:37:15.459734   23949 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:15.459754   23949 out.go:285] * 
	W1126 19:37:15.463166   23949 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:15.464399   23949 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-368879 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-368879
helpers_test.go:243: (dbg) docker inspect addons-368879:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f",
	        "Created": "2025-11-26T19:35:21.207538359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16263,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T19:35:21.242595793Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f/hosts",
	        "LogPath": "/var/lib/docker/containers/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f/c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f-json.log",
	        "Name": "/addons-368879",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-368879:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-368879",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c5a5d6b5ca141e91cffb14dacb2b3c8e7898d84341498be347204c5b7ef1bf1f",
	                "LowerDir": "/var/lib/docker/overlay2/0d523fa88ed8896b4c412fe32c96c7888feffb0b8675ad8f5cecec48a7a18c10-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d523fa88ed8896b4c412fe32c96c7888feffb0b8675ad8f5cecec48a7a18c10/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d523fa88ed8896b4c412fe32c96c7888feffb0b8675ad8f5cecec48a7a18c10/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d523fa88ed8896b4c412fe32c96c7888feffb0b8675ad8f5cecec48a7a18c10/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-368879",
	                "Source": "/var/lib/docker/volumes/addons-368879/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-368879",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-368879",
	                "name.minikube.sigs.k8s.io": "addons-368879",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1045bd9f6d8c38e8a848c1c51bb8163e146e1d17c95af24aedd024c0c52fdf6c",
	            "SandboxKey": "/var/run/docker/netns/1045bd9f6d8c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-368879": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d7d50d131ba94f9c1dcd0658d7aa81e19dda84f0c78ad10918d150767794fbb9",
	                    "EndpointID": "ba2b32dd65fd6c3b57eff8942bcf5fb1a66a971fa8132ac8c556cafc6c58b49d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "46:dc:7a:ab:0a:c1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-368879",
	                        "c5a5d6b5ca14"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-368879 -n addons-368879
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-368879 logs -n 25: (1.134256537s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-179609 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-179609   │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ delete  │ -p download-only-179609                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-179609   │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ start   │ -o=json --download-only -p download-only-602722 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-602722   │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ delete  │ -p download-only-602722                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-602722   │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ delete  │ -p download-only-179609                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-179609   │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ delete  │ -p download-only-602722                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-602722   │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ start   │ --download-only -p download-docker-444715 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-444715 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ delete  │ -p download-docker-444715                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-444715 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ start   │ --download-only -p binary-mirror-671361 --alsologtostderr --binary-mirror http://127.0.0.1:46231 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-671361   │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ delete  │ -p binary-mirror-671361                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-671361   │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ addons  │ enable dashboard -p addons-368879                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-368879          │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ addons  │ disable dashboard -p addons-368879                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-368879          │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ start   │ -p addons-368879 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-368879          │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-368879 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-368879          │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ addons-368879 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-368879          │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	│ addons  │ enable headlamp -p addons-368879 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-368879          │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:34:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:34:58.155861   15626 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:34:58.156078   15626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:58.156085   15626 out.go:374] Setting ErrFile to fd 2...
	I1126 19:34:58.156089   15626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:58.156283   15626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:34:58.156732   15626 out.go:368] Setting JSON to false
	I1126 19:34:58.157470   15626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1048,"bootTime":1764184650,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:34:58.157513   15626 start.go:143] virtualization: kvm guest
	I1126 19:34:58.159197   15626 out.go:179] * [addons-368879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 19:34:58.160360   15626 notify.go:221] Checking for updates...
	I1126 19:34:58.160381   15626 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:34:58.161556   15626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:34:58.162705   15626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 19:34:58.163709   15626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 19:34:58.164666   15626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 19:34:58.165667   15626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:34:58.167022   15626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:34:58.189593   15626 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 19:34:58.189698   15626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:34:58.244787   15626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-26 19:34:58.235994448 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:34:58.244876   15626 docker.go:319] overlay module found
	I1126 19:34:58.246333   15626 out.go:179] * Using the docker driver based on user configuration
	I1126 19:34:58.247200   15626 start.go:309] selected driver: docker
	I1126 19:34:58.247212   15626 start.go:927] validating driver "docker" against <nil>
	I1126 19:34:58.247221   15626 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:34:58.247723   15626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:34:58.298416   15626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-26 19:34:58.290008365 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:34:58.298577   15626 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:34:58.298779   15626 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:34:58.300228   15626 out.go:179] * Using Docker driver with root privileges
	I1126 19:34:58.301260   15626 cni.go:84] Creating CNI manager for ""
	I1126 19:34:58.301317   15626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:34:58.301328   15626 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 19:34:58.301387   15626 start.go:353] cluster config:
	{Name:addons-368879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1126 19:34:58.302546   15626 out.go:179] * Starting "addons-368879" primary control-plane node in "addons-368879" cluster
	I1126 19:34:58.303464   15626 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 19:34:58.304550   15626 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 19:34:58.305549   15626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:34:58.305574   15626 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 19:34:58.305581   15626 cache.go:65] Caching tarball of preloaded images
	I1126 19:34:58.305640   15626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 19:34:58.305667   15626 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 19:34:58.305675   15626 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 19:34:58.305979   15626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/config.json ...
	I1126 19:34:58.306008   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/config.json: {Name:mkf0e501ca958c4c4e8ce566039c46c9b04d2c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:34:58.320818   15626 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1126 19:34:58.320924   15626 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1126 19:34:58.320946   15626 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1126 19:34:58.320952   15626 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1126 19:34:58.320958   15626 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1126 19:34:58.320963   15626 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1126 19:35:10.156164   15626 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1126 19:35:10.156199   15626 cache.go:243] Successfully downloaded all kic artifacts
	I1126 19:35:10.156240   15626 start.go:360] acquireMachinesLock for addons-368879: {Name:mk3b87926377a18b5a2efa47c95e4b5d36fee531 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 19:35:10.156337   15626 start.go:364] duration metric: took 75.941µs to acquireMachinesLock for "addons-368879"
	I1126 19:35:10.156368   15626 start.go:93] Provisioning new machine with config: &{Name:addons-368879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:35:10.156437   15626 start.go:125] createHost starting for "" (driver="docker")
	I1126 19:35:10.157865   15626 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1126 19:35:10.158055   15626 start.go:159] libmachine.API.Create for "addons-368879" (driver="docker")
	I1126 19:35:10.158092   15626 client.go:173] LocalClient.Create starting
	I1126 19:35:10.158227   15626 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 19:35:10.246048   15626 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 19:35:10.323163   15626 cli_runner.go:164] Run: docker network inspect addons-368879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 19:35:10.340157   15626 cli_runner.go:211] docker network inspect addons-368879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 19:35:10.340223   15626 network_create.go:284] running [docker network inspect addons-368879] to gather additional debugging logs...
	I1126 19:35:10.340239   15626 cli_runner.go:164] Run: docker network inspect addons-368879
	W1126 19:35:10.355499   15626 cli_runner.go:211] docker network inspect addons-368879 returned with exit code 1
	I1126 19:35:10.355526   15626 network_create.go:287] error running [docker network inspect addons-368879]: docker network inspect addons-368879: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-368879 not found
	I1126 19:35:10.355540   15626 network_create.go:289] output of [docker network inspect addons-368879]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-368879 not found
	
	** /stderr **
	I1126 19:35:10.355629   15626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 19:35:10.370616   15626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d818d0}
	I1126 19:35:10.370662   15626 network_create.go:124] attempt to create docker network addons-368879 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1126 19:35:10.370702   15626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-368879 addons-368879
	I1126 19:35:10.413482   15626 network_create.go:108] docker network addons-368879 192.168.49.0/24 created
	I1126 19:35:10.413518   15626 kic.go:121] calculated static IP "192.168.49.2" for the "addons-368879" container
	I1126 19:35:10.413582   15626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 19:35:10.428058   15626 cli_runner.go:164] Run: docker volume create addons-368879 --label name.minikube.sigs.k8s.io=addons-368879 --label created_by.minikube.sigs.k8s.io=true
	I1126 19:35:10.442836   15626 oci.go:103] Successfully created a docker volume addons-368879
	I1126 19:35:10.442944   15626 cli_runner.go:164] Run: docker run --rm --name addons-368879-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-368879 --entrypoint /usr/bin/test -v addons-368879:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 19:35:16.843863   15626 cli_runner.go:217] Completed: docker run --rm --name addons-368879-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-368879 --entrypoint /usr/bin/test -v addons-368879:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (6.400878368s)
	I1126 19:35:16.843902   15626 oci.go:107] Successfully prepared a docker volume addons-368879
	I1126 19:35:16.843972   15626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:35:16.843994   15626 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 19:35:16.844045   15626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-368879:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 19:35:21.138174   15626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-368879:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.294095631s)
	I1126 19:35:21.138200   15626 kic.go:203] duration metric: took 4.294212367s to extract preloaded images to volume ...
	W1126 19:35:21.138283   15626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 19:35:21.138311   15626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 19:35:21.138358   15626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 19:35:21.192733   15626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-368879 --name addons-368879 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-368879 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-368879 --network addons-368879 --ip 192.168.49.2 --volume addons-368879:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 19:35:21.488347   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Running}}
	I1126 19:35:21.506665   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:21.524524   15626 cli_runner.go:164] Run: docker exec addons-368879 stat /var/lib/dpkg/alternatives/iptables
	I1126 19:35:21.568584   15626 oci.go:144] the created container "addons-368879" has a running status.
	I1126 19:35:21.568611   15626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa...
	I1126 19:35:21.584666   15626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 19:35:21.609405   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:21.626611   15626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 19:35:21.626631   15626 kic_runner.go:114] Args: [docker exec --privileged addons-368879 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 19:35:21.672761   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:21.692040   15626 machine.go:94] provisionDockerMachine start ...
	I1126 19:35:21.692138   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:21.713531   15626 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:21.713781   15626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:35:21.713796   15626 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 19:35:21.715014   15626 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33214->127.0.0.1:32768: read: connection reset by peer
	I1126 19:35:24.850665   15626 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-368879
	
	I1126 19:35:24.850694   15626 ubuntu.go:182] provisioning hostname "addons-368879"
	I1126 19:35:24.850751   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:24.867845   15626 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:24.868054   15626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:35:24.868066   15626 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-368879 && echo "addons-368879" | sudo tee /etc/hostname
	I1126 19:35:25.009358   15626 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-368879
	
	I1126 19:35:25.009441   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.027454   15626 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:25.027658   15626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:35:25.027675   15626 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-368879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-368879/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-368879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 19:35:25.161204   15626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 19:35:25.161227   15626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 19:35:25.161255   15626 ubuntu.go:190] setting up certificates
	I1126 19:35:25.161266   15626 provision.go:84] configureAuth start
	I1126 19:35:25.161323   15626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-368879
	I1126 19:35:25.177653   15626 provision.go:143] copyHostCerts
	I1126 19:35:25.177718   15626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 19:35:25.177841   15626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 19:35:25.177912   15626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 19:35:25.177963   15626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.addons-368879 san=[127.0.0.1 192.168.49.2 addons-368879 localhost minikube]
	I1126 19:35:25.201322   15626 provision.go:177] copyRemoteCerts
	I1126 19:35:25.201367   15626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 19:35:25.201399   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.217022   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:25.312375   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 19:35:25.329560   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 19:35:25.344739   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 19:35:25.360149   15626 provision.go:87] duration metric: took 198.873025ms to configureAuth
	I1126 19:35:25.360169   15626 ubuntu.go:206] setting minikube options for container-runtime
	I1126 19:35:25.360320   15626 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:35:25.360415   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.376890   15626 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:25.377089   15626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:35:25.377105   15626 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 19:35:25.645667   15626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 19:35:25.645695   15626 machine.go:97] duration metric: took 3.953625498s to provisionDockerMachine
	I1126 19:35:25.645707   15626 client.go:176] duration metric: took 15.487604821s to LocalClient.Create
	I1126 19:35:25.645728   15626 start.go:167] duration metric: took 15.487672535s to libmachine.API.Create "addons-368879"
	I1126 19:35:25.645737   15626 start.go:293] postStartSetup for "addons-368879" (driver="docker")
	I1126 19:35:25.645752   15626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 19:35:25.645823   15626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 19:35:25.645868   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.663631   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:25.761398   15626 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 19:35:25.764411   15626 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 19:35:25.764442   15626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 19:35:25.764453   15626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 19:35:25.764517   15626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 19:35:25.764549   15626 start.go:296] duration metric: took 118.804834ms for postStartSetup
	I1126 19:35:25.764818   15626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-368879
	I1126 19:35:25.781036   15626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/config.json ...
	I1126 19:35:25.781283   15626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:35:25.781329   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.796877   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:25.889609   15626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 19:35:25.893559   15626 start.go:128] duration metric: took 15.737106954s to createHost
	I1126 19:35:25.893578   15626 start.go:83] releasing machines lock for "addons-368879", held for 15.737227352s
	I1126 19:35:25.893626   15626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-368879
	I1126 19:35:25.909889   15626 ssh_runner.go:195] Run: cat /version.json
	I1126 19:35:25.909934   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.909990   15626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 19:35:25.910065   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:25.927286   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:25.927779   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:26.019734   15626 ssh_runner.go:195] Run: systemctl --version
	I1126 19:35:26.092394   15626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 19:35:26.124073   15626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 19:35:26.128247   15626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 19:35:26.128313   15626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 19:35:26.151250   15626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 19:35:26.151268   15626 start.go:496] detecting cgroup driver to use...
	I1126 19:35:26.151292   15626 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 19:35:26.151322   15626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 19:35:26.165164   15626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 19:35:26.175666   15626 docker.go:218] disabling cri-docker service (if available) ...
	I1126 19:35:26.175709   15626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 19:35:26.190094   15626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 19:35:26.205207   15626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 19:35:26.279881   15626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 19:35:26.361506   15626 docker.go:234] disabling docker service ...
	I1126 19:35:26.361556   15626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 19:35:26.378076   15626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 19:35:26.389425   15626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 19:35:26.470171   15626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 19:35:26.547490   15626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 19:35:26.558374   15626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 19:35:26.571231   15626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 19:35:26.571301   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.580474   15626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 19:35:26.580543   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.588195   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.595858   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.603584   15626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 19:35:26.610601   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.618164   15626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.630037   15626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:26.637709   15626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 19:35:26.644146   15626 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1126 19:35:26.644191   15626 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1126 19:35:26.654993   15626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 19:35:26.661547   15626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:35:26.736695   15626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 19:35:26.864227   15626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 19:35:26.864298   15626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 19:35:26.867802   15626 start.go:564] Will wait 60s for crictl version
	I1126 19:35:26.867849   15626 ssh_runner.go:195] Run: which crictl
	I1126 19:35:26.871019   15626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 19:35:26.893360   15626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 19:35:26.893451   15626 ssh_runner.go:195] Run: crio --version
	I1126 19:35:26.918429   15626 ssh_runner.go:195] Run: crio --version
	I1126 19:35:26.944661   15626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 19:35:26.945711   15626 cli_runner.go:164] Run: docker network inspect addons-368879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 19:35:26.961960   15626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 19:35:26.965528   15626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:35:26.974923   15626 kubeadm.go:884] updating cluster {Name:addons-368879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 19:35:26.975021   15626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:35:26.975063   15626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:35:27.004349   15626 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:35:27.004368   15626 crio.go:433] Images already preloaded, skipping extraction
	I1126 19:35:27.004414   15626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:35:27.027312   15626 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:35:27.027331   15626 cache_images.go:86] Images are preloaded, skipping loading
	I1126 19:35:27.027338   15626 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1126 19:35:27.027433   15626 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-368879 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 19:35:27.027514   15626 ssh_runner.go:195] Run: crio config
	I1126 19:35:27.068260   15626 cni.go:84] Creating CNI manager for ""
	I1126 19:35:27.068283   15626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:35:27.068300   15626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 19:35:27.068319   15626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-368879 NodeName:addons-368879 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 19:35:27.068452   15626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-368879"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 19:35:27.068530   15626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 19:35:27.075884   15626 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 19:35:27.075938   15626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 19:35:27.082894   15626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 19:35:27.094452   15626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 19:35:27.108321   15626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1126 19:35:27.119434   15626 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1126 19:35:27.122497   15626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:35:27.131097   15626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:35:27.205743   15626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:35:27.226309   15626 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879 for IP: 192.168.49.2
	I1126 19:35:27.226329   15626 certs.go:195] generating shared ca certs ...
	I1126 19:35:27.226347   15626 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.226480   15626 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 19:35:27.266098   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt ...
	I1126 19:35:27.266120   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt: {Name:mk08fe333e2718aa9edd591caefe2790eeb5ee03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.266282   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key ...
	I1126 19:35:27.266296   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key: {Name:mka51114cd9cf1bef98339a3911048402c34d92a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.266397   15626 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 19:35:27.367200   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt ...
	I1126 19:35:27.367221   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt: {Name:mk9667acd9406cd8f55b4e5d2ce62084c1571746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.367379   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key ...
	I1126 19:35:27.367396   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key: {Name:mk31ed3fba07b16735240e6c762ea28b2931504e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.367508   15626 certs.go:257] generating profile certs ...
	I1126 19:35:27.367563   15626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.key
	I1126 19:35:27.367576   15626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt with IP's: []
	I1126 19:35:27.445350   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt ...
	I1126 19:35:27.445369   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: {Name:mk13337429698fea7d30e4adeecfa0bf36f32c90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.445523   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.key ...
	I1126 19:35:27.445537   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.key: {Name:mk5c06ab1f23d6acc5f1b73e1dd4952a8de6d5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.445637   15626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key.743572d0
	I1126 19:35:27.445656   15626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt.743572d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1126 19:35:27.592468   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt.743572d0 ...
	I1126 19:35:27.592489   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt.743572d0: {Name:mk7d32c35019d4cd63bfbdcd4906e3c002cfa51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.592641   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key.743572d0 ...
	I1126 19:35:27.592657   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key.743572d0: {Name:mkd98dad1bb4c177a838df62609be2b8b55f5481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.592753   15626 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt.743572d0 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt
	I1126 19:35:27.592830   15626 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key.743572d0 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key
	I1126 19:35:27.592878   15626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.key
	I1126 19:35:27.592894   15626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.crt with IP's: []
	I1126 19:35:27.722491   15626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.crt ...
	I1126 19:35:27.722510   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.crt: {Name:mkdb50128ffcd4eb9744e0b6126b238e19b333f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.722651   15626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.key ...
	I1126 19:35:27.722664   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.key: {Name:mk0567f2bc89b129782af6e1ddd0b88433338274 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:27.722886   15626 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 19:35:27.722924   15626 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 19:35:27.722950   15626 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 19:35:27.722973   15626 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 19:35:27.723535   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 19:35:27.740362   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 19:35:27.756213   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 19:35:27.771714   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 19:35:27.787072   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 19:35:27.802591   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 19:35:27.818067   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 19:35:27.833188   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 19:35:27.848496   15626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 19:35:27.865570   15626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 19:35:27.876587   15626 ssh_runner.go:195] Run: openssl version
	I1126 19:35:27.881989   15626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 19:35:27.891300   15626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:35:27.894489   15626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:35:27.894530   15626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:35:27.927259   15626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 19:35:27.935372   15626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 19:35:27.938751   15626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 19:35:27.938802   15626 kubeadm.go:401] StartCluster: {Name:addons-368879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-368879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:35:27.938876   15626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:35:27.938911   15626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:35:27.966560   15626 cri.go:89] found id: ""
	I1126 19:35:27.966621   15626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 19:35:27.973848   15626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 19:35:27.980834   15626 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 19:35:27.980880   15626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 19:35:27.987536   15626 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 19:35:27.987551   15626 kubeadm.go:158] found existing configuration files:
	
	I1126 19:35:27.987584   15626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 19:35:27.994408   15626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 19:35:27.994440   15626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 19:35:28.000807   15626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 19:35:28.007486   15626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 19:35:28.007529   15626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 19:35:28.013850   15626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 19:35:28.020601   15626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 19:35:28.020631   15626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 19:35:28.026832   15626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 19:35:28.033413   15626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 19:35:28.033445   15626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 19:35:28.039815   15626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 19:35:28.091882   15626 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 19:35:28.143294   15626 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 19:35:36.677875   15626 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 19:35:36.677952   15626 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 19:35:36.678067   15626 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 19:35:36.678137   15626 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 19:35:36.678186   15626 kubeadm.go:319] OS: Linux
	I1126 19:35:36.678236   15626 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 19:35:36.678282   15626 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 19:35:36.678326   15626 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 19:35:36.678372   15626 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 19:35:36.678414   15626 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 19:35:36.678481   15626 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 19:35:36.678525   15626 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 19:35:36.678572   15626 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 19:35:36.678635   15626 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 19:35:36.678769   15626 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 19:35:36.678900   15626 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 19:35:36.678975   15626 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 19:35:36.680434   15626 out.go:252]   - Generating certificates and keys ...
	I1126 19:35:36.680526   15626 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 19:35:36.680599   15626 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 19:35:36.680675   15626 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 19:35:36.680740   15626 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 19:35:36.680820   15626 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 19:35:36.680880   15626 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 19:35:36.680932   15626 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 19:35:36.681034   15626 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-368879 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1126 19:35:36.681085   15626 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 19:35:36.681192   15626 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-368879 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1126 19:35:36.681248   15626 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 19:35:36.681303   15626 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 19:35:36.681346   15626 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 19:35:36.681400   15626 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 19:35:36.681451   15626 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 19:35:36.681547   15626 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 19:35:36.681627   15626 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 19:35:36.681752   15626 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 19:35:36.681810   15626 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 19:35:36.681909   15626 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 19:35:36.681973   15626 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 19:35:36.683164   15626 out.go:252]   - Booting up control plane ...
	I1126 19:35:36.683231   15626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 19:35:36.683313   15626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 19:35:36.683375   15626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 19:35:36.683485   15626 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 19:35:36.683597   15626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 19:35:36.683703   15626 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 19:35:36.683773   15626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 19:35:36.683808   15626 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 19:35:36.683913   15626 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 19:35:36.684013   15626 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 19:35:36.684072   15626 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001561636s
	I1126 19:35:36.684153   15626 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 19:35:36.684230   15626 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1126 19:35:36.684311   15626 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 19:35:36.684380   15626 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 19:35:36.684439   15626 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.101321794s
	I1126 19:35:36.684512   15626 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.848795955s
	I1126 19:35:36.684576   15626 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501880389s
	I1126 19:35:36.684666   15626 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 19:35:36.684786   15626 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 19:35:36.684870   15626 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 19:35:36.685062   15626 kubeadm.go:319] [mark-control-plane] Marking the node addons-368879 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 19:35:36.685154   15626 kubeadm.go:319] [bootstrap-token] Using token: ooclz9.4sx22jlmjqnuuxe0
	I1126 19:35:36.686446   15626 out.go:252]   - Configuring RBAC rules ...
	I1126 19:35:36.686584   15626 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 19:35:36.686686   15626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 19:35:36.686826   15626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 19:35:36.686946   15626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 19:35:36.687088   15626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 19:35:36.687184   15626 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 19:35:36.687315   15626 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 19:35:36.687377   15626 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 19:35:36.687450   15626 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 19:35:36.687473   15626 kubeadm.go:319] 
	I1126 19:35:36.687552   15626 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 19:35:36.687561   15626 kubeadm.go:319] 
	I1126 19:35:36.687672   15626 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 19:35:36.687681   15626 kubeadm.go:319] 
	I1126 19:35:36.687723   15626 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 19:35:36.687800   15626 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 19:35:36.687844   15626 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 19:35:36.687849   15626 kubeadm.go:319] 
	I1126 19:35:36.687922   15626 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 19:35:36.687931   15626 kubeadm.go:319] 
	I1126 19:35:36.687985   15626 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 19:35:36.687992   15626 kubeadm.go:319] 
	I1126 19:35:36.688036   15626 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 19:35:36.688095   15626 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 19:35:36.688185   15626 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 19:35:36.688195   15626 kubeadm.go:319] 
	I1126 19:35:36.688317   15626 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 19:35:36.688419   15626 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 19:35:36.688426   15626 kubeadm.go:319] 
	I1126 19:35:36.688556   15626 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ooclz9.4sx22jlmjqnuuxe0 \
	I1126 19:35:36.688668   15626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 19:35:36.688692   15626 kubeadm.go:319] 	--control-plane 
	I1126 19:35:36.688701   15626 kubeadm.go:319] 
	I1126 19:35:36.688776   15626 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 19:35:36.688782   15626 kubeadm.go:319] 
	I1126 19:35:36.688857   15626 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ooclz9.4sx22jlmjqnuuxe0 \
	I1126 19:35:36.688952   15626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
	I1126 19:35:36.688962   15626 cni.go:84] Creating CNI manager for ""
	I1126 19:35:36.688967   15626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:35:36.690165   15626 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 19:35:36.691257   15626 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 19:35:36.695142   15626 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 19:35:36.695159   15626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 19:35:36.707382   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 19:35:36.893563   15626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 19:35:36.893660   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:36.893726   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-368879 minikube.k8s.io/updated_at=2025_11_26T19_35_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=addons-368879 minikube.k8s.io/primary=true
	I1126 19:35:36.904375   15626 ops.go:34] apiserver oom_adj: -16
	I1126 19:35:36.963693   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:37.463734   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:37.964432   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:38.464280   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:38.964662   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:39.464696   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:39.964646   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:40.463850   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:40.964412   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:41.464370   15626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:41.524765   15626 kubeadm.go:1114] duration metric: took 4.631161589s to wait for elevateKubeSystemPrivileges
	I1126 19:35:41.524806   15626 kubeadm.go:403] duration metric: took 13.586008107s to StartCluster
	I1126 19:35:41.524826   15626 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:41.524926   15626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 19:35:41.525289   15626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:41.525487   15626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 19:35:41.525500   15626 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:35:41.525551   15626 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1126 19:35:41.525689   15626 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:35:41.525701   15626 addons.go:70] Setting default-storageclass=true in profile "addons-368879"
	I1126 19:35:41.525717   15626 addons.go:70] Setting metrics-server=true in profile "addons-368879"
	I1126 19:35:41.525724   15626 addons.go:70] Setting cloud-spanner=true in profile "addons-368879"
	I1126 19:35:41.525742   15626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-368879"
	I1126 19:35:41.525745   15626 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-368879"
	I1126 19:35:41.525751   15626 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-368879"
	I1126 19:35:41.525730   15626 addons.go:70] Setting inspektor-gadget=true in profile "addons-368879"
	I1126 19:35:41.525769   15626 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-368879"
	I1126 19:35:41.525778   15626 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-368879"
	I1126 19:35:41.525780   15626 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-368879"
	I1126 19:35:41.525784   15626 addons.go:70] Setting gcp-auth=true in profile "addons-368879"
	I1126 19:35:41.525798   15626 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-368879"
	I1126 19:35:41.525802   15626 mustload.go:66] Loading cluster: addons-368879
	I1126 19:35:41.525780   15626 addons.go:70] Setting storage-provisioner=true in profile "addons-368879"
	I1126 19:35:41.525832   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.525838   15626 addons.go:239] Setting addon storage-provisioner=true in "addons-368879"
	I1126 19:35:41.525886   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.525701   15626 addons.go:70] Setting yakd=true in profile "addons-368879"
	I1126 19:35:41.525905   15626 addons.go:239] Setting addon yakd=true in "addons-368879"
	I1126 19:35:41.525923   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.525974   15626 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:35:41.526069   15626 addons.go:70] Setting ingress=true in profile "addons-368879"
	I1126 19:35:41.526083   15626 addons.go:239] Setting addon ingress=true in "addons-368879"
	I1126 19:35:41.526109   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526200   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526214   15626 addons.go:70] Setting volcano=true in profile "addons-368879"
	I1126 19:35:41.526229   15626 addons.go:239] Setting addon volcano=true in "addons-368879"
	I1126 19:35:41.526229   15626 addons.go:70] Setting ingress-dns=true in profile "addons-368879"
	I1126 19:35:41.526242   15626 addons.go:239] Setting addon ingress-dns=true in "addons-368879"
	I1126 19:35:41.526251   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526275   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526346   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526379   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526386   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526560   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526718   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526879   15626 addons.go:70] Setting volumesnapshots=true in profile "addons-368879"
	I1126 19:35:41.526903   15626 addons.go:239] Setting addon volumesnapshots=true in "addons-368879"
	I1126 19:35:41.526932   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526824   15626 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-368879"
	I1126 19:35:41.526956   15626 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-368879"
	I1126 19:35:41.526987   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.525774   15626 addons.go:239] Setting addon inspektor-gadget=true in "addons-368879"
	I1126 19:35:41.526219   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.527366   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.527677   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.526760   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.528197   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.525743   15626 addons.go:239] Setting addon metrics-server=true in "addons-368879"
	I1126 19:35:41.528281   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.528778   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.525772   15626 addons.go:239] Setting addon cloud-spanner=true in "addons-368879"
	I1126 19:35:41.529087   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.529611   15626 addons.go:70] Setting registry-creds=true in profile "addons-368879"
	I1126 19:35:41.529649   15626 addons.go:239] Setting addon registry-creds=true in "addons-368879"
	I1126 19:35:41.529684   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.529967   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.530176   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.531564   15626 out.go:179] * Verifying Kubernetes components...
	I1126 19:35:41.526811   15626 addons.go:70] Setting registry=true in profile "addons-368879"
	I1126 19:35:41.531622   15626 addons.go:239] Setting addon registry=true in "addons-368879"
	I1126 19:35:41.531649   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.532096   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.526204   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.532869   15626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:35:41.525805   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.536791   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.537776   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.575857   15626 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1126 19:35:41.577017   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1126 19:35:41.577045   15626 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1126 19:35:41.577107   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.586076   15626 addons.go:239] Setting addon default-storageclass=true in "addons-368879"
	I1126 19:35:41.586137   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.589112   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.601221   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.602929   15626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1126 19:35:41.603321   15626 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1126 19:35:41.605039   15626 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:35:41.605258   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1126 19:35:41.605164   15626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:35:41.605872   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.607835   15626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:35:41.609251   15626 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:35:41.609268   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1126 19:35:41.609316   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	W1126 19:35:41.611251   15626 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1126 19:35:41.618546   15626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 19:35:41.622004   15626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:35:41.622025   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 19:35:41.622095   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.629300   15626 out.go:179]   - Using image docker.io/registry:3.0.0
	I1126 19:35:41.629314   15626 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1126 19:35:41.630589   15626 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1126 19:35:41.630728   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1126 19:35:41.630744   15626 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1126 19:35:41.631368   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1126 19:35:41.631708   15626 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:35:41.632567   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1126 19:35:41.632305   15626 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1126 19:35:41.632723   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.633678   15626 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1126 19:35:41.636994   15626 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1126 19:35:41.637025   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1126 19:35:41.637078   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.638840   15626 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:35:41.638854   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1126 19:35:41.638911   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.639145   15626 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1126 19:35:41.640513   15626 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:35:41.640538   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1126 19:35:41.640597   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.640662   15626 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1126 19:35:41.640876   15626 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:35:41.640891   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1126 19:35:41.640957   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.640676   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1126 19:35:41.641623   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.642555   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1126 19:35:41.643692   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1126 19:35:41.643707   15626 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1126 19:35:41.643751   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.644365   15626 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-368879"
	I1126 19:35:41.644499   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:41.645213   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:41.648590   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1126 19:35:41.650018   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1126 19:35:41.651618   15626 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1126 19:35:41.651671   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1126 19:35:41.653608   15626 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1126 19:35:41.653626   15626 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1126 19:35:41.653688   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.653975   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1126 19:35:41.655339   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1126 19:35:41.656673   15626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1126 19:35:41.657696   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1126 19:35:41.657757   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1126 19:35:41.657865   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.673057   15626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 19:35:41.673087   15626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 19:35:41.673143   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.675538   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.680845   15626 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1126 19:35:41.682666   15626 out.go:179]   - Using image docker.io/busybox:stable
	I1126 19:35:41.684594   15626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:35:41.684614   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1126 19:35:41.684693   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:41.689931   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.690120   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.696180   15626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
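The long command above pipes the live `coredns` ConfigMap through sed to insert a `hosts {}` block (mapping `host.minikube.internal` to the gateway IP) before the `forward` directive, plus a `log` directive before `errors`, then `kubectl replace`s the result. A minimal local reproduction of just the sed step, against a sample Corefile snippet (assumption: GNU sed, which honors the `i \` one-liner form with embedded `\n`):

```shell
# Reproduce the Corefile rewrite from the log on a throwaway sample file:
# insert a hosts{} block before "forward . /etc/resolv.conf" and a "log"
# directive before "errors". Indentation (8 spaces) must match the Corefile.
cat > /tmp/corefile <<'EOF'
.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
}
EOF
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' /tmp/corefile
```

The output is the Corefile with both directives injected, matching the "host record injected into CoreDNS's ConfigMap" line later in the log.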
	I1126 19:35:41.702159   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.705226   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.707830   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.711616   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.724718   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.726279   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.730669   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.730706   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.732979   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.737718   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.740608   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.740639   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:41.744090   15626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1126 19:35:41.745122   15626 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1126 19:35:41.745152   15626 retry.go:31] will retry after 250.837123ms: ssh: handshake failed: EOF
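The `handshake failed: EOF` above is a transient dial failure against the container's SSH port; minikube's retry.go simply waits (~250ms here) and tries again. The same bounded retry-with-delay pattern, sketched as a generic shell helper (`true`/`false` below merely stand in for the dial; the helper itself is hypothetical, not minikube code):

```shell
# retry ATTEMPTS DELAY CMD...: run CMD up to ATTEMPTS times, sleeping DELAY
# seconds between failures; exit 0 on the first success, 1 if all fail.
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    echo "attempt $i failed; will retry after ${delay}s" >&2
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}
retry 3 0 true    # stand-in dial that succeeds immediately
```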
	I1126 19:35:41.859298   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:35:41.863405   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:35:41.880200   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 19:35:41.880349   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1126 19:35:41.882972   15626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1126 19:35:41.882991   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1126 19:35:41.892625   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:35:41.901082   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:35:41.903639   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:35:41.920968   15626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1126 19:35:41.920994   15626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1126 19:35:41.922173   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:35:41.925567   15626 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1126 19:35:41.925586   15626 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1126 19:35:41.927214   15626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1126 19:35:41.927229   15626 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1126 19:35:41.929728   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1126 19:35:41.929742   15626 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1126 19:35:41.936236   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:35:41.952487   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1126 19:35:41.952594   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1126 19:35:41.960017   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1126 19:35:41.960035   15626 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1126 19:35:41.972750   15626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1126 19:35:41.972824   15626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1126 19:35:41.983234   15626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:35:41.983312   15626 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1126 19:35:41.985721   15626 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:35:41.985741   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1126 19:35:42.000234   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1126 19:35:42.000283   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1126 19:35:42.003342   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1126 19:35:42.003383   15626 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1126 19:35:42.011058   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:35:42.021359   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:35:42.023885   15626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1126 19:35:42.023905   15626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1126 19:35:42.035940   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1126 19:35:42.035965   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1126 19:35:42.040023   15626 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1126 19:35:42.041239   15626 node_ready.go:35] waiting up to 6m0s for node "addons-368879" to be "Ready" ...
	I1126 19:35:42.067391   15626 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:35:42.067421   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1126 19:35:42.123911   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1126 19:35:42.123956   15626 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1126 19:35:42.125038   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1126 19:35:42.125057   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1126 19:35:42.139172   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:35:42.175932   15626 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:35:42.175950   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1126 19:35:42.203436   15626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1126 19:35:42.203479   15626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1126 19:35:42.238076   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:35:42.249780   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:35:42.258021   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1126 19:35:42.258105   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1126 19:35:42.295828   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1126 19:35:42.295852   15626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1126 19:35:42.371238   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1126 19:35:42.371332   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1126 19:35:42.418156   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1126 19:35:42.418222   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1126 19:35:42.449707   15626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1126 19:35:42.449730   15626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1126 19:35:42.501117   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1126 19:35:42.545578   15626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-368879" context rescaled to 1 replicas
	W1126 19:35:42.771206   15626 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1126 19:35:43.044818   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.143701384s)
	I1126 19:35:43.044854   15626 addons.go:495] Verifying addon ingress=true in "addons-368879"
	I1126 19:35:43.044874   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.14120073s)
	I1126 19:35:43.044965   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.108711312s)
	I1126 19:35:43.044936   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.122734533s)
	I1126 19:35:43.045051   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.033960058s)
	I1126 19:35:43.045065   15626 addons.go:495] Verifying addon registry=true in "addons-368879"
	I1126 19:35:43.045122   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023730881s)
	I1126 19:35:43.045139   15626 addons.go:495] Verifying addon metrics-server=true in "addons-368879"
	I1126 19:35:43.048586   15626 out.go:179] * Verifying ingress addon...
	I1126 19:35:43.048589   15626 out.go:179] * Verifying registry addon...
	I1126 19:35:43.049243   15626 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-368879 service yakd-dashboard -n yakd-dashboard
	
	I1126 19:35:43.050779   15626 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1126 19:35:43.051369   15626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1126 19:35:43.052962   15626 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1126 19:35:43.053115   15626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1126 19:35:43.053133   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:43.506818   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.268702461s)
	W1126 19:35:43.506868   15626 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1126 19:35:43.506891   15626 retry.go:31] will retry after 350.045154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
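The `no matches for kind "VolumeSnapshotClass"` failure above is an ordering problem: the `VolumeSnapshotClass` custom resource is submitted in the same `kubectl apply` invocation that creates its CRD, before the API server has established the new type. The log shows minikube recovering by retrying (later with `apply --force`); the usual manual fix is to apply the CRDs first and block until they are established. A sketch (assumes a reachable cluster and the addon manifests on disk; not runnable standalone):

```shell
# 1. Create the snapshot CRDs on their own.
kubectl apply \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

# 2. Wait until the API server has established the new types.
kubectl wait --for condition=established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
  crd/volumesnapshots.snapshot.storage.k8s.io

# 3. Only then create custom resources such as the VolumeSnapshotClass.
kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
```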
	I1126 19:35:43.506904   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.257008406s)
	I1126 19:35:43.507128   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.005904803s)
	I1126 19:35:43.507155   15626 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-368879"
	I1126 19:35:43.508894   15626 out.go:179] * Verifying csi-hostpath-driver addon...
	I1126 19:35:43.511110   15626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1126 19:35:43.513016   15626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1126 19:35:43.513037   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:43.553529   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:43.553667   15626 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1126 19:35:43.553681   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:43.857191   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:35:44.014631   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:44.043438   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:44.114795   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:44.114909   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:44.514411   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:44.553166   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:44.553295   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:45.013445   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:45.113802   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:45.113934   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:45.513774   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:45.553156   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:45.553315   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:46.014176   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:46.054390   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:46.054582   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:46.268954   15626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.411722541s)
	I1126 19:35:46.514197   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:46.543856   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:46.552425   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:46.553422   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:47.014742   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:47.115446   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:47.115704   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:47.514112   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:47.552870   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:47.553774   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:48.014246   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:48.114730   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:48.114805   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:48.514427   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:48.553146   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:48.553351   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:49.013868   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:49.043073   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:49.114965   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:49.115152   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:49.227789   15626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1126 19:35:49.227846   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:49.245437   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:49.353171   15626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1126 19:35:49.364868   15626 addons.go:239] Setting addon gcp-auth=true in "addons-368879"
	I1126 19:35:49.364922   15626 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:35:49.365244   15626 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:35:49.381685   15626 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1126 19:35:49.381739   15626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:35:49.397919   15626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:35:49.491643   15626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:35:49.492895   15626 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1126 19:35:49.493963   15626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1126 19:35:49.493976   15626 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1126 19:35:49.505819   15626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1126 19:35:49.505834   15626 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1126 19:35:49.514328   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:49.517692   15626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:35:49.517705   15626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1126 19:35:49.529807   15626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:35:49.553557   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:49.553727   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:49.807417   15626 addons.go:495] Verifying addon gcp-auth=true in "addons-368879"
	I1126 19:35:49.810495   15626 out.go:179] * Verifying gcp-auth addon...
	I1126 19:35:49.812147   15626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1126 19:35:49.815943   15626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1126 19:35:49.815966   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:50.014318   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:50.053119   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:50.053926   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:50.315264   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:50.513726   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:50.553349   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:50.553443   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:50.814802   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:51.014286   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:51.044050   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:51.052842   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:51.054024   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:51.315372   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:51.513950   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:51.552949   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:51.553870   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:51.815499   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:52.013581   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:52.053266   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:52.053499   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:52.314785   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:52.514331   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:52.552804   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:52.553628   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:52.815246   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:53.014033   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:53.053712   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:53.054351   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:53.315244   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:53.513606   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:53.543377   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:53.553293   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:53.553409   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:53.814720   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:54.014199   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:54.053020   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:54.053868   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:54.315278   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:54.513582   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:54.553270   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:54.553270   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:54.814393   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:55.014208   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:55.053360   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:55.053600   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:55.315102   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:55.513645   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:55.543514   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:55.553145   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:55.553360   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:55.814530   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:56.013819   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:56.053532   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:56.053687   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:56.314873   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:56.514501   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:56.553011   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:56.554005   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:56.815508   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:57.014055   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:57.052901   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:57.053837   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:57.315231   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:57.513572   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:57.552947   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:57.553148   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:57.814396   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:58.013768   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:35:58.043645   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:35:58.053395   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:58.053528   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:58.314948   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:58.514381   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:58.553077   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:58.553156   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:58.814530   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:59.014276   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:59.053312   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:59.053479   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:59.314755   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:35:59.513947   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:35:59.553494   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:35:59.553609   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:35:59.814721   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:00.014116   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:00.044049   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:00.053062   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:00.053911   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:00.315095   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:00.513425   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:00.553206   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:00.553395   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:00.814474   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:01.014015   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:01.053482   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:01.053644   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:01.314926   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:01.514368   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:01.552928   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:01.553964   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:01.814411   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:02.013794   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:02.053575   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:02.053754   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:02.315133   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:02.513355   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:02.543140   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:02.553094   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:02.553933   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:02.814275   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:03.013909   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:03.053442   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:03.053594   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:03.315189   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:03.513293   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:03.553002   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:03.553768   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:03.815004   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:04.013175   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:04.052782   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:04.053693   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:04.315278   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:04.513586   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:04.543323   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:04.553115   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:04.553165   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:04.814321   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:05.013760   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:05.053493   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:05.053745   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:05.315488   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:05.513815   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:05.553419   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:05.553639   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:05.814790   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:06.014191   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:06.053075   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:06.053944   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:06.315518   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:06.514696   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:06.543535   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:06.553421   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:06.553450   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:06.814958   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:07.014668   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:07.053535   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:07.053684   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:07.315031   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:07.513550   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:07.553331   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:07.553520   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:07.814708   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:08.014258   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:08.053218   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:08.053408   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:08.314745   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:08.514248   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:08.544085   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:08.552904   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:08.553951   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:08.815171   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:09.013500   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:09.053112   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:09.053221   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:09.315215   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:09.513432   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:09.553367   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:09.553564   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:09.814866   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:10.014231   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:10.052904   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:10.053775   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:10.315266   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:10.513553   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:10.553038   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:10.553105   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:10.814398   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:11.013706   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:11.043547   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:11.053313   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:11.053441   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:11.314745   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:11.514168   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:11.552684   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:11.553638   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:11.815253   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:12.013556   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:12.053344   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:12.053400   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:12.314887   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:12.514271   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:12.553032   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:12.553958   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:12.815303   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:13.013726   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:13.043741   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:13.053557   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:13.053763   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:13.315049   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:13.514237   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:13.552624   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:13.553529   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:13.814651   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:14.013880   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:14.053866   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:14.054037   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:14.315368   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:14.513657   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:14.553448   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:14.553674   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:14.814793   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:15.014420   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:15.053224   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:15.053441   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:15.314984   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:15.514212   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:15.544001   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:15.552756   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:15.553774   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:15.814964   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:16.014334   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:16.053333   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:16.054186   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:16.314475   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:16.514573   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:16.553364   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:16.554142   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:16.814637   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:17.014033   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:17.053070   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:17.053885   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:17.315298   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:17.513829   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:17.553376   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:17.553574   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:17.814921   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:18.014180   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:18.044058   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:18.053068   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:18.053873   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:18.315198   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:18.513511   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:18.553565   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:18.553640   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:18.814931   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:19.014207   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:19.052940   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:19.053845   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:19.314397   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:19.513870   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:19.553678   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:19.553872   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:19.814215   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:20.013362   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:20.053137   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:20.053320   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:20.314445   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:20.513892   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:36:20.543683   15626 node_ready.go:57] node "addons-368879" has "Ready":"False" status (will retry)
	I1126 19:36:20.553474   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:20.553680   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:20.815054   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:21.014281   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:21.053214   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:21.053240   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:21.314429   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:21.513747   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:21.553659   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:21.553716   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:21.815284   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:22.013649   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:22.053383   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:22.053637   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:22.314908   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:22.514386   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:22.552868   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:22.553955   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:22.815452   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:23.014039   15626 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1126 19:36:23.014057   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:23.045484   15626 node_ready.go:49] node "addons-368879" is "Ready"
	I1126 19:36:23.045513   15626 node_ready.go:38] duration metric: took 41.004255143s for node "addons-368879" to be "Ready" ...
	I1126 19:36:23.045528   15626 api_server.go:52] waiting for apiserver process to appear ...
	I1126 19:36:23.045582   15626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:36:23.053115   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:23.053955   15626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1126 19:36:23.053973   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:23.060946   15626 api_server.go:72] duration metric: took 41.535409883s to wait for apiserver process to appear ...
	I1126 19:36:23.060966   15626 api_server.go:88] waiting for apiserver healthz status ...
	I1126 19:36:23.060987   15626 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1126 19:36:23.064957   15626 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1126 19:36:23.065791   15626 api_server.go:141] control plane version: v1.34.1
	I1126 19:36:23.065824   15626 api_server.go:131] duration metric: took 4.85025ms to wait for apiserver health ...
	I1126 19:36:23.065835   15626 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 19:36:23.068558   15626 system_pods.go:59] 20 kube-system pods found
	I1126 19:36:23.068584   15626 system_pods.go:61] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.068591   15626 system_pods.go:61] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.068598   15626 system_pods.go:61] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending
	I1126 19:36:23.068603   15626 system_pods.go:61] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.068610   15626 system_pods.go:61] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending
	I1126 19:36:23.068614   15626 system_pods.go:61] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.068617   15626 system_pods.go:61] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.068620   15626 system_pods.go:61] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.068626   15626 system_pods.go:61] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.068632   15626 system_pods.go:61] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.068637   15626 system_pods.go:61] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.068641   15626 system_pods.go:61] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.068648   15626 system_pods.go:61] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.068651   15626 system_pods.go:61] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending
	I1126 19:36:23.068659   15626 system_pods.go:61] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.068663   15626 system_pods.go:61] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.068670   15626 system_pods.go:61] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.068676   15626 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.068684   15626 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.068689   15626 system_pods.go:61] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.068697   15626 system_pods.go:74] duration metric: took 2.857163ms to wait for pod list to return data ...
	I1126 19:36:23.068706   15626 default_sa.go:34] waiting for default service account to be created ...
	I1126 19:36:23.070213   15626 default_sa.go:45] found service account: "default"
	I1126 19:36:23.070228   15626 default_sa.go:55] duration metric: took 1.517275ms for default service account to be created ...
	I1126 19:36:23.070235   15626 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 19:36:23.072883   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:23.072906   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.072912   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.072919   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending
	I1126 19:36:23.072924   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.072928   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending
	I1126 19:36:23.072931   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.072935   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.072942   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.072948   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.072953   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.072957   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.072961   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.072966   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.072971   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending
	I1126 19:36:23.072976   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.072980   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.072984   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.072990   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.072997   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.073008   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.073021   15626 retry.go:31] will retry after 225.984763ms: missing components: kube-dns
	I1126 19:36:23.304100   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:23.304134   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.304144   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.304154   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:36:23.304161   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.304170   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:36:23.304180   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.304188   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.304197   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.304202   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.304213   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.304218   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.304230   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.304238   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.304250   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:23.304260   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.304273   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.304283   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.304297   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.304311   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.304326   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.304348   15626 retry.go:31] will retry after 284.583109ms: missing components: kube-dns
	I1126 19:36:23.402386   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:23.513891   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:23.554235   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:23.554251   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:23.616167   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:23.616193   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.616202   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.616208   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:36:23.616214   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.616219   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:36:23.616223   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.616229   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.616233   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.616237   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.616246   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.616250   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.616254   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.616258   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.616263   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:23.616274   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.616282   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.616293   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.616303   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.616314   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.616323   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.616339   15626 retry.go:31] will retry after 333.768916ms: missing components: kube-dns
	I1126 19:36:23.815834   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:23.954773   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:23.954803   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:23.954812   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:23.954818   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:36:23.954823   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:23.954829   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:36:23.954834   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:23.954838   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:23.954842   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:23.954848   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:23.954853   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:23.954860   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:23.954863   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:23.954868   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:23.954873   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:23.954880   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:23.954885   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:23.954893   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:23.954898   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.954906   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:23.954911   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:23.954927   15626 retry.go:31] will retry after 606.877014ms: missing components: kube-dns
	I1126 19:36:24.013633   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:24.054647   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:24.054764   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:24.315107   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:24.514383   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:24.553847   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:24.554382   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:24.565987   15626 system_pods.go:86] 20 kube-system pods found
	I1126 19:36:24.566017   15626 system_pods.go:89] "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:24.566027   15626 system_pods.go:89] "coredns-66bc5c9577-rv6zq" [fda827e5-458e-47e4-856b-f300a1b580aa] Running
	I1126 19:36:24.566038   15626 system_pods.go:89] "csi-hostpath-attacher-0" [3ce365e2-f149-4dfe-8c32-92f49c9d6157] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:36:24.566049   15626 system_pods.go:89] "csi-hostpath-resizer-0" [8a26132b-ecf3-46aa-aeea-30a0a1aaf322] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:36:24.566057   15626 system_pods.go:89] "csi-hostpathplugin-4cdfn" [dd181b66-dc21-4179-9580-ca4f0f403bb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:36:24.566066   15626 system_pods.go:89] "etcd-addons-368879" [8e5bd70d-0fd3-492d-b6b2-d2c4e787b008] Running
	I1126 19:36:24.566075   15626 system_pods.go:89] "kindnet-dqhsm" [0e21747b-887f-42ba-946a-ce6d8aaaf19a] Running
	I1126 19:36:24.566081   15626 system_pods.go:89] "kube-apiserver-addons-368879" [688a9861-a405-428e-b035-60d971e9c639] Running
	I1126 19:36:24.566091   15626 system_pods.go:89] "kube-controller-manager-addons-368879" [f6dd02aa-10ac-42fc-b22c-e403a4be5178] Running
	I1126 19:36:24.566099   15626 system_pods.go:89] "kube-ingress-dns-minikube" [78401885-fa6d-4da9-8f6b-90e399e33134] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:24.566107   15626 system_pods.go:89] "kube-proxy-jvtzp" [4315ba8f-bd42-4af5-99ad-333bfd119723] Running
	I1126 19:36:24.566113   15626 system_pods.go:89] "kube-scheduler-addons-368879" [53644990-e6e9-4c0b-948c-60ddae17c928] Running
	I1126 19:36:24.566136   15626 system_pods.go:89] "metrics-server-85b7d694d7-mnzc2" [ef15070b-c3d9-4b20-b2b3-e53708b4a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:24.566148   15626 system_pods.go:89] "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:24.566156   15626 system_pods.go:89] "registry-6b586f9694-4kzdl" [09b5cb7a-fb98-4680-a679-8c1716e1f038] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:24.566169   15626 system_pods.go:89] "registry-creds-764b6fb674-rspjs" [3fc9a944-7747-4626-b4ca-0f2e048703fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:24.566177   15626 system_pods.go:89] "registry-proxy-lcrcc" [62d90aca-8e64-4089-83fd-bc39957164ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:24.566187   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2lfj6" [a495ae59-667b-4e3b-914d-f8f925d11a5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:24.566195   15626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kvzv" [f3968d51-d680-46be-bb2a-7fa01e497530] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:24.566204   15626 system_pods.go:89] "storage-provisioner" [e959d9e7-8cb7-4ca2-963a-7a0c7d416565] Running
	I1126 19:36:24.566215   15626 system_pods.go:126] duration metric: took 1.495973725s to wait for k8s-apps to be running ...
	I1126 19:36:24.566229   15626 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 19:36:24.566274   15626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:36:24.582229   15626 system_svc.go:56] duration metric: took 15.994101ms WaitForService to wait for kubelet
	I1126 19:36:24.582253   15626 kubeadm.go:587] duration metric: took 43.056719279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:36:24.582272   15626 node_conditions.go:102] verifying NodePressure condition ...
	I1126 19:36:24.584620   15626 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 19:36:24.584640   15626 node_conditions.go:123] node cpu capacity is 8
	I1126 19:36:24.584653   15626 node_conditions.go:105] duration metric: took 2.37555ms to run NodePressure ...
	I1126 19:36:24.584668   15626 start.go:242] waiting for startup goroutines ...
	I1126 19:36:24.815784   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:25.015221   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:25.053762   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:25.053790   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:25.316075   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:25.514283   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:25.615141   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:25.615191   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:25.814593   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:26.014582   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:26.054000   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:26.054023   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:26.315693   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:26.514778   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:26.554201   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:26.554330   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:26.814993   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:27.014761   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:27.054741   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:27.055083   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:27.315109   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:27.514172   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:27.553997   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:27.554486   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:27.815740   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:28.014966   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:28.110945   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:28.111277   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:28.314846   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:28.514562   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:28.553539   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:28.553642   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:28.814719   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:29.014674   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:29.054384   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:29.054532   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:29.314797   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:29.515217   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:29.554194   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:29.554283   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:29.814739   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:30.014744   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:30.053864   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:30.053901   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:30.315532   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:30.514389   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:30.553774   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:30.554264   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:30.815046   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:31.013643   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:31.054184   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:31.054409   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:31.315228   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:31.515757   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:31.554078   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:31.554125   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:31.816299   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:32.014298   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:32.115308   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:32.115337   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:32.314529   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:32.514237   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:32.553783   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:32.554289   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:32.815789   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:33.014811   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:33.054411   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:33.054558   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:33.315080   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:33.514413   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:33.553604   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:33.553614   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:33.814913   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:34.014860   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:34.054146   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:34.054320   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:34.314733   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:34.514352   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:34.553344   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:34.553360   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:34.814647   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:35.014609   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:35.053827   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:35.053846   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:35.315897   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:35.515346   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:35.553643   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:35.553650   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:35.815534   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:36.014669   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:36.054214   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:36.054416   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:36.315234   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:36.514322   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:36.553807   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:36.554084   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:36.815667   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:37.014229   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:37.053605   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:37.053668   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:37.315589   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:37.514860   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:37.554197   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:37.554209   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:37.815150   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:38.014198   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:38.053994   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:38.054232   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:38.314909   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:38.515147   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:38.553226   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:38.553981   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:38.815494   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:39.014492   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:39.054189   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:39.054434   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:39.316231   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:39.578145   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:39.578197   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:39.578264   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:39.814561   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:40.014142   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:40.053639   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:40.054095   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:40.314690   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:40.514542   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:40.553413   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:40.553509   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:40.815099   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:41.014084   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:41.053720   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:41.054380   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:41.315058   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:41.513582   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:41.553404   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:41.553424   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:41.815407   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:42.014483   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:42.053646   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:42.053887   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:42.315425   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:42.514195   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:42.553746   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:42.614980   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:42.815235   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:43.014159   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:43.053404   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:43.054418   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:43.315893   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:43.514351   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:43.553390   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:43.554052   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:43.814761   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:44.014903   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:44.054085   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:44.054154   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:44.314532   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:44.514632   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:44.614938   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:44.614976   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:44.815010   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:45.015034   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:45.056898   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:45.057071   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:45.314689   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:45.522101   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:45.560030   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:45.560084   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:45.814776   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:46.015831   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:46.056254   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:46.056819   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:46.315378   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:46.514475   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:46.553955   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:46.554049   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:46.815793   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:47.015044   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:47.053891   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:47.054237   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:47.314836   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:47.514940   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:47.615209   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:47.615241   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:47.815268   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:48.014236   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:48.053741   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:48.054175   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:48.315208   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:48.514002   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:48.553143   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:48.553814   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:48.815227   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:49.013708   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:49.053812   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:49.053838   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:49.315592   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:49.514968   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:49.615834   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:49.615943   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:49.815846   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:50.014674   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:50.053804   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:50.053840   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:50.315187   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:50.514339   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:50.553916   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:50.554504   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:50.815340   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:51.014420   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:51.053955   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:51.054183   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:51.314655   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:51.514506   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:51.615067   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:51.615102   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:51.815943   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:52.013909   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:52.054355   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:52.054428   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:52.314521   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:52.514076   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:52.553638   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:52.554429   15626 kapi.go:107] duration metric: took 1m9.503058212s to wait for kubernetes.io/minikube-addons=registry ...
	I1126 19:36:52.814952   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:53.015553   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:53.054367   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:53.315902   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:53.514837   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:53.554244   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:53.814844   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:54.014992   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:54.053606   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:54.314884   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:54.515414   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:54.554120   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:54.815774   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:55.014588   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:55.053794   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:55.317188   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:55.514386   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:55.553997   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:55.815499   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:56.015503   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:56.053965   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:56.315897   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:56.514691   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:56.553935   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:56.815528   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:57.014349   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:57.053300   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:57.316066   15626 kapi.go:107] duration metric: took 1m7.503916703s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1126 19:36:57.318195   15626 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-368879 cluster.
	I1126 19:36:57.319372   15626 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1126 19:36:57.320737   15626 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1126 19:36:57.515414   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:57.556444   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:58.014806   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:58.054064   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:58.515486   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:58.553717   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:59.014223   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:59.054372   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:59.514872   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:59.554099   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:00.077932   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:00.077959   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:00.515354   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:00.553590   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:01.014946   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:01.054451   15626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:01.514298   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:01.553335   15626 kapi.go:107] duration metric: took 1m18.502553999s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1126 19:37:02.038107   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:02.515106   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:03.014838   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:03.513660   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:04.014840   15626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:04.514932   15626 kapi.go:107] duration metric: took 1m21.003818866s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1126 19:37:04.516240   15626 out.go:179] * Enabled addons: registry-creds, ingress-dns, cloud-spanner, storage-provisioner-rancher, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1126 19:37:04.517372   15626 addons.go:530] duration metric: took 1m22.991800608s for enable addons: enabled=[registry-creds ingress-dns cloud-spanner storage-provisioner-rancher nvidia-device-plugin amd-gpu-device-plugin storage-provisioner metrics-server yakd inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1126 19:37:04.517412   15626 start.go:247] waiting for cluster config update ...
	I1126 19:37:04.517442   15626 start.go:256] writing updated cluster config ...
	I1126 19:37:04.517745   15626 ssh_runner.go:195] Run: rm -f paused
	I1126 19:37:04.521491   15626 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:37:04.525229   15626 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rv6zq" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.529042   15626 pod_ready.go:94] pod "coredns-66bc5c9577-rv6zq" is "Ready"
	I1126 19:37:04.529062   15626 pod_ready.go:86] duration metric: took 3.813945ms for pod "coredns-66bc5c9577-rv6zq" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.530697   15626 pod_ready.go:83] waiting for pod "etcd-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.533655   15626 pod_ready.go:94] pod "etcd-addons-368879" is "Ready"
	I1126 19:37:04.533675   15626 pod_ready.go:86] duration metric: took 2.961786ms for pod "etcd-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.535214   15626 pod_ready.go:83] waiting for pod "kube-apiserver-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.538174   15626 pod_ready.go:94] pod "kube-apiserver-addons-368879" is "Ready"
	I1126 19:37:04.538190   15626 pod_ready.go:86] duration metric: took 2.960721ms for pod "kube-apiserver-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.539680   15626 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:04.925214   15626 pod_ready.go:94] pod "kube-controller-manager-addons-368879" is "Ready"
	I1126 19:37:04.925245   15626 pod_ready.go:86] duration metric: took 385.549536ms for pod "kube-controller-manager-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:05.125035   15626 pod_ready.go:83] waiting for pod "kube-proxy-jvtzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:05.527857   15626 pod_ready.go:94] pod "kube-proxy-jvtzp" is "Ready"
	I1126 19:37:05.527883   15626 pod_ready.go:86] duration metric: took 402.827458ms for pod "kube-proxy-jvtzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:05.724325   15626 pod_ready.go:83] waiting for pod "kube-scheduler-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:06.124201   15626 pod_ready.go:94] pod "kube-scheduler-addons-368879" is "Ready"
	I1126 19:37:06.124225   15626 pod_ready.go:86] duration metric: took 399.880093ms for pod "kube-scheduler-addons-368879" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:06.124240   15626 pod_ready.go:40] duration metric: took 1.602722661s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:37:06.167560   15626 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 19:37:06.169559   15626 out.go:179] * Done! kubectl is now configured to use "addons-368879" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.022620918Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3a991131-7bdf-4e6d-be46-34b13eb40572 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.023942478Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.763547264Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3a991131-7bdf-4e6d-be46-34b13eb40572 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.764036578Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c41be1d6-b261-4693-abef-187d299d6b5b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.765255682Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aec5edad-b56f-4578-a79c-fe04619fabef name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.768179702Z" level=info msg="Creating container: default/busybox/busybox" id=c200dc7a-16a3-4d89-9c17-c353c32704d1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.768285362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.773173063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.773564577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.803050411Z" level=info msg="Created container 564d1ab3e2c23a42198fb2f071df0363915eeadcc0d9e331bdb93fac07dcfab3: default/busybox/busybox" id=c200dc7a-16a3-4d89-9c17-c353c32704d1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.803558758Z" level=info msg="Starting container: 564d1ab3e2c23a42198fb2f071df0363915eeadcc0d9e331bdb93fac07dcfab3" id=bc8c7c5d-049d-431d-bd95-cfdbee1e1a53 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 19:37:07 addons-368879 crio[774]: time="2025-11-26T19:37:07.805283293Z" level=info msg="Started container" PID=6286 containerID=564d1ab3e2c23a42198fb2f071df0363915eeadcc0d9e331bdb93fac07dcfab3 description=default/busybox/busybox id=bc8c7c5d-049d-431d-bd95-cfdbee1e1a53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=25e6f87e1f138488748ca045e3233aa006e3a24837ba1e9b8486e878bb7512f8
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.916315539Z" level=info msg="Running pod sandbox: default/nginx/POD" id=0d2f821f-d4c7-4167-bbed-860e1002b23f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.916391191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.924069737Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:0f59abf6693cfc8baa546f023a99fac041eb29d2be1e8616d46051413e8ca2db UID:05e3f0cf-f584-45a4-8207-05bff33cd676 NetNS:/var/run/netns/dcf7df4f-6e86-403f-ba7f-5f167fac5155 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001285d0}] Aliases:map[]}"
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.92412317Z" level=info msg="Adding pod default_nginx to CNI network \"kindnet\" (type=ptp)"
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.935523052Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:0f59abf6693cfc8baa546f023a99fac041eb29d2be1e8616d46051413e8ca2db UID:05e3f0cf-f584-45a4-8207-05bff33cd676 NetNS:/var/run/netns/dcf7df4f-6e86-403f-ba7f-5f167fac5155 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001285d0}] Aliases:map[]}"
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.93564566Z" level=info msg="Checking pod default_nginx for CNI network kindnet (type=ptp)"
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.936685798Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.937722763Z" level=info msg="Ran pod sandbox 0f59abf6693cfc8baa546f023a99fac041eb29d2be1e8616d46051413e8ca2db with infra container: default/nginx/POD" id=0d2f821f-d4c7-4167-bbed-860e1002b23f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.938792348Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c5074359-0717-45ca-9532-910c9cb3baf0 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.938953736Z" level=info msg="Image docker.io/nginx:alpine not found" id=c5074359-0717-45ca-9532-910c9cb3baf0 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.938994943Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=c5074359-0717-45ca-9532-910c9cb3baf0 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.939606039Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=048216b0-580a-453d-812d-89036e6c82cb name=/runtime.v1.ImageService/PullImage
	Nov 26 19:37:15 addons-368879 crio[774]: time="2025-11-26T19:37:15.941194726Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	564d1ab3e2c23       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   25e6f87e1f138       busybox                                    default
	cff11b2555930       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          12 seconds ago       Running             csi-snapshotter                          0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	212fd16c128eb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 seconds ago       Running             csi-provisioner                          0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	9ad67d88c52b2       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 seconds ago       Running             liveness-probe                           0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	05f619672cbe3       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             15 seconds ago       Running             controller                               0                   09a8cfd5a1366       ingress-nginx-controller-6c8bf45fb-f6sg8   ingress-nginx
	61531082a7094       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             19 seconds ago       Exited              patch                                    2                   7549b111a1684       gcp-auth-certs-patch-x56k5                 gcp-auth
	73beca01f1a18       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 19 seconds ago       Running             gcp-auth                                 0                   fb1126009b9a6       gcp-auth-78565c9fb4-277vt                  gcp-auth
	064c32b317de7       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             20 seconds ago       Exited              patch                                    2                   2bc3d288cb5f8       ingress-nginx-admission-patch-8mvpf        ingress-nginx
	36ba93a3abe19       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           20 seconds ago       Running             hostpath                                 0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	d9b60fb8e7242       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            21 seconds ago       Running             gadget                                   0                   c4c915d612293       gadget-rmxlg                               gadget
	4197c12bab9b3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                24 seconds ago       Running             node-driver-registrar                    0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	714962454fedb       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              25 seconds ago       Running             registry-proxy                           0                   dd1eb951e9ba6       registry-proxy-lcrcc                       kube-system
	10ce256fa3fd9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   26 seconds ago       Exited              create                                   0                   b37cda4596ec5       gcp-auth-certs-create-ljwzp                gcp-auth
	603eac3a5db35       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     26 seconds ago       Running             amd-gpu-device-plugin                    0                   22e90389a3e73       amd-gpu-device-plugin-gj5pg                kube-system
	d97dad65e0c22       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             28 seconds ago       Running             csi-attacher                             0                   b02c8daa86aab       csi-hostpath-attacher-0                    kube-system
	e825b2f37651c       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     28 seconds ago       Running             nvidia-device-plugin-ctr                 0                   49d93fe45ce5c       nvidia-device-plugin-daemonset-jr6zz       kube-system
	7dfc385a20d46       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      31 seconds ago       Running             volume-snapshot-controller               0                   d2d21922bcfe7       snapshot-controller-7d9fbc56b8-4kvzv       kube-system
	47283ac77595b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      32 seconds ago       Running             volume-snapshot-controller               0                   55b337aac826c       snapshot-controller-7d9fbc56b8-2lfj6       kube-system
	5a22bb7b95033       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   33 seconds ago       Running             csi-external-health-monitor-controller   0                   476bf7ba4fae0       csi-hostpathplugin-4cdfn                   kube-system
	4001803d68503       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   33 seconds ago       Exited              create                                   0                   03ccd7fc8f711       ingress-nginx-admission-create-tbk6s       ingress-nginx
	efa536fb3d778       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              34 seconds ago       Running             csi-resizer                              0                   7a5905c54d8fe       csi-hostpath-resizer-0                     kube-system
	65470a1503151       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           36 seconds ago       Running             registry                                 0                   e298b73fdbb9f       registry-6b586f9694-4kzdl                  kube-system
	eb50e6ab9debf       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               38 seconds ago       Running             minikube-ingress-dns                     0                   5ed74000c985f       kube-ingress-dns-minikube                  kube-system
	1eaf62f13549a       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              44 seconds ago       Running             yakd                                     0                   373ebc59fb5fc       yakd-dashboard-5ff678cb9-zkdvc             yakd-dashboard
	2cf89df41d649       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             48 seconds ago       Running             local-path-provisioner                   0                   e54e661b8694f       local-path-provisioner-648f6765c9-4lngh    local-path-storage
	719640c6c4cf6       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        49 seconds ago       Running             metrics-server                           0                   d8b84982c82d2       metrics-server-85b7d694d7-mnzc2            kube-system
	93006bab5753d       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               50 seconds ago       Running             cloud-spanner-emulator                   0                   ed79da8d73539       cloud-spanner-emulator-5bdddb765-wqlhm     default
	d64fe5dcd9941       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             53 seconds ago       Running             coredns                                  0                   6cdc9a81ff136       coredns-66bc5c9577-rv6zq                   kube-system
	25e48df5dfb4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             53 seconds ago       Running             storage-provisioner                      0                   1a532dabf38f0       storage-provisioner                        kube-system
	59ceeea3b62d8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   6b827d89c503d       kube-proxy-jvtzp                           kube-system
	c71770537fdbd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   bbfa93b5771e3       kindnet-dqhsm                              kube-system
	6d9b40c465aff       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   8023c2e989f33       kube-apiserver-addons-368879               kube-system
	00f8c7ca3495a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   c6c6b7410e26b       kube-scheduler-addons-368879               kube-system
	f7ace30aee7af       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   b9422d1b1b1d5       etcd-addons-368879                         kube-system
	beecb43fac96b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   f2bffafd455a4       kube-controller-manager-addons-368879      kube-system
	
	
	==> coredns [d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859] <==
	[INFO] 10.244.0.19:36236 - 4431 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000173054s
	[INFO] 10.244.0.19:44538 - 6660 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116934s
	[INFO] 10.244.0.19:44538 - 6386 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000147324s
	[INFO] 10.244.0.19:36701 - 22540 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000074205s
	[INFO] 10.244.0.19:36701 - 22299 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.00009602s
	[INFO] 10.244.0.19:60139 - 56248 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000069276s
	[INFO] 10.244.0.19:60139 - 56026 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000093711s
	[INFO] 10.244.0.19:41962 - 18752 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000069218s
	[INFO] 10.244.0.19:41962 - 18592 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000107829s
	[INFO] 10.244.0.19:48968 - 65030 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125915s
	[INFO] 10.244.0.19:48968 - 65438 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000164718s
	[INFO] 10.244.0.22:41103 - 49716 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000230012s
	[INFO] 10.244.0.22:33835 - 3617 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000267119s
	[INFO] 10.244.0.22:51296 - 52191 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166609s
	[INFO] 10.244.0.22:40551 - 6028 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000186782s
	[INFO] 10.244.0.22:60634 - 51608 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116391s
	[INFO] 10.244.0.22:46386 - 22532 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000172146s
	[INFO] 10.244.0.22:60922 - 59753 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004420742s
	[INFO] 10.244.0.22:40374 - 14087 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006018846s
	[INFO] 10.244.0.22:46867 - 11971 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005200066s
	[INFO] 10.244.0.22:49741 - 61297 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005543734s
	[INFO] 10.244.0.22:51466 - 51251 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005132371s
	[INFO] 10.244.0.22:51594 - 34280 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005729549s
	[INFO] 10.244.0.22:59217 - 3605 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001632058s
	[INFO] 10.244.0.22:42526 - 38586 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002172618s
	
	
	==> describe nodes <==
	Name:               addons-368879
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-368879
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=addons-368879
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_35_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-368879
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-368879"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:35:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-368879
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 19:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 19:36:47 +0000   Wed, 26 Nov 2025 19:35:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 19:36:47 +0000   Wed, 26 Nov 2025 19:35:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 19:36:47 +0000   Wed, 26 Nov 2025 19:35:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 19:36:47 +0000   Wed, 26 Nov 2025 19:36:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-368879
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                3b9ff54c-dae7-424b-a157-0391af8d1944
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-wqlhm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gadget                      gadget-rmxlg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  gcp-auth                    gcp-auth-78565c9fb4-277vt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-f6sg8    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         93s
	  kube-system                 amd-gpu-device-plugin-gj5pg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 coredns-66bc5c9577-rv6zq                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     95s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 csi-hostpathplugin-4cdfn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 etcd-addons-368879                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         101s
	  kube-system                 kindnet-dqhsm                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      95s
	  kube-system                 kube-apiserver-addons-368879                250m (3%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-addons-368879       200m (2%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-jvtzp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-addons-368879                100m (1%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 metrics-server-85b7d694d7-mnzc2             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         94s
	  kube-system                 nvidia-device-plugin-daemonset-jr6zz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 registry-6b586f9694-4kzdl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 registry-creds-764b6fb674-rspjs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 registry-proxy-lcrcc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 snapshot-controller-7d9fbc56b8-2lfj6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 snapshot-controller-7d9fbc56b8-4kvzv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  local-path-storage          local-path-provisioner-648f6765c9-4lngh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-zkdvc              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 93s   kube-proxy       
	  Normal  Starting                 101s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s  kubelet          Node addons-368879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s  kubelet          Node addons-368879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s  kubelet          Node addons-368879 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           96s   node-controller  Node addons-368879 event: Registered Node addons-368879 in Controller
	  Normal  NodeReady                54s   kubelet          Node addons-368879 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov26 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001889] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.371488] i8042: Warning: Keylock active
	[  +0.012266] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497745] block sda: the capability attribute has been deprecated.
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160] <==
	{"level":"warn","ts":"2025-11-26T19:35:33.043086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.048946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.055129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.064524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.073541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.079777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.085423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.091890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.097780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.105442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.111988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.117821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.124290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.130013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.138602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.144424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.163885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.169585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.175323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:33.227391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:44.037543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:35:44.043691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:36:10.608952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:36:10.629871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:36:10.635762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54812","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [73beca01f1a189371bd5d5cacdcac2a900e8b76f1a10e8424c5b84daf53c68f4] <==
	2025/11/26 19:36:56 GCP Auth Webhook started!
	2025/11/26 19:37:06 Ready to marshal response ...
	2025/11/26 19:37:06 Ready to write response ...
	2025/11/26 19:37:06 Ready to marshal response ...
	2025/11/26 19:37:06 Ready to write response ...
	2025/11/26 19:37:06 Ready to marshal response ...
	2025/11/26 19:37:06 Ready to write response ...
	2025/11/26 19:37:15 Ready to marshal response ...
	2025/11/26 19:37:15 Ready to write response ...
	
	
	==> kernel <==
	 19:37:16 up 19 min,  0 user,  load average: 2.28, 0.93, 0.34
	Linux addons-368879 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff] <==
	I1126 19:35:42.428414       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T19:35:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 19:35:42.686549       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 19:35:42.686567       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 19:35:42.686577       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 19:35:42.686688       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 19:36:12.686165       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 19:36:12.686306       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 19:36:12.687232       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 19:36:12.689399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1126 19:36:13.987680       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 19:36:13.987703       1 metrics.go:72] Registering metrics
	I1126 19:36:13.987758       1 controller.go:711] "Syncing nftables rules"
	I1126 19:36:22.693639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:36:22.693679       1 main.go:301] handling current node
	I1126 19:36:32.686343       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:36:32.686397       1 main.go:301] handling current node
	I1126 19:36:42.686430       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:36:42.686478       1 main.go:301] handling current node
	I1126 19:36:52.686262       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:36:52.686288       1 main.go:301] handling current node
	I1126 19:37:02.686333       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:37:02.686373       1 main.go:301] handling current node
	I1126 19:37:12.686281       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:37:12.686323       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310] <==
	W1126 19:36:10.608882       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:36:10.615554       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:36:10.629847       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:36:10.635759       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:36:22.918825       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.218.116:443: connect: connection refused
	E1126 19:36:22.918868       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.218.116:443: connect: connection refused" logger="UnhandledError"
	W1126 19:36:22.918951       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.218.116:443: connect: connection refused
	E1126 19:36:22.918983       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.218.116:443: connect: connection refused" logger="UnhandledError"
	W1126 19:36:22.937144       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.218.116:443: connect: connection refused
	E1126 19:36:22.937183       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.218.116:443: connect: connection refused" logger="UnhandledError"
	W1126 19:36:22.947348       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.218.116:443: connect: connection refused
	E1126 19:36:22.947391       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.218.116:443: connect: connection refused" logger="UnhandledError"
	W1126 19:36:28.130713       1 handler_proxy.go:99] no RequestInfo found in the context
	E1126 19:36:28.130785       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1126 19:36:28.130812       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.61.54:443: connect: connection refused" logger="UnhandledError"
	E1126 19:36:28.132889       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.61.54:443: connect: connection refused" logger="UnhandledError"
	E1126 19:36:28.138748       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.61.54:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.61.54:443: connect: connection refused" logger="UnhandledError"
	I1126 19:36:28.182542       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1126 19:37:14.836570       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39742: use of closed network connection
	E1126 19:37:14.980403       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39766: use of closed network connection
	I1126 19:37:15.468362       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1126 19:37:15.648628       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.138.27"}
	
	
	==> kube-controller-manager [beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65] <==
	I1126 19:35:40.591943       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 19:35:40.592163       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 19:35:40.592180       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 19:35:40.592265       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1126 19:35:40.592935       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 19:35:40.592959       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 19:35:40.592978       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 19:35:40.592994       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 19:35:40.593048       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 19:35:40.593471       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 19:35:40.596056       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 19:35:40.597255       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 19:35:40.598410       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:35:40.600633       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 19:35:40.604864       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 19:35:40.615151       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1126 19:35:42.740667       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1126 19:36:10.602193       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1126 19:36:10.602320       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1126 19:36:10.602352       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1126 19:36:10.621994       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1126 19:36:10.625048       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1126 19:36:10.702441       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:36:10.725622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:36:25.547699       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c] <==
	I1126 19:35:42.481846       1 server_linux.go:53] "Using iptables proxy"
	I1126 19:35:42.722541       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 19:35:42.829102       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 19:35:42.829136       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1126 19:35:42.829227       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 19:35:42.855108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 19:35:42.855229       1 server_linux.go:132] "Using iptables Proxier"
	I1126 19:35:42.862509       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 19:35:42.869749       1 server.go:527] "Version info" version="v1.34.1"
	I1126 19:35:42.869810       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:35:42.873117       1 config.go:106] "Starting endpoint slice config controller"
	I1126 19:35:42.873147       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 19:35:42.873186       1 config.go:200] "Starting service config controller"
	I1126 19:35:42.873192       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 19:35:42.873347       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 19:35:42.873364       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 19:35:42.873563       1 config.go:309] "Starting node config controller"
	I1126 19:35:42.873584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 19:35:42.873592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 19:35:42.974169       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 19:35:42.974206       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 19:35:42.974180       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7] <==
	E1126 19:35:33.612940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:35:33.612962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 19:35:33.613028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:35:33.613055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:35:33.613217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 19:35:33.616223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:35:33.616253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:35:33.616225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:35:33.616356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 19:35:33.616369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:35:33.616377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 19:35:33.616435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 19:35:33.616514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:35:33.616516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:35:34.491405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:35:34.519637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 19:35:34.552530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:35:34.620064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:35:34.659688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:35:34.701773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 19:35:34.712886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:35:34.753889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:35:34.769786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:35:34.785615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1126 19:35:35.010803       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 19:36:54 addons-368879 kubelet[1272]: E1126 19:36:54.729333    1272 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 26 19:36:54 addons-368879 kubelet[1272]: E1126 19:36:54.729420    1272 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fc9a944-7747-4626-b4ca-0f2e048703fd-gcr-creds podName:3fc9a944-7747-4626-b4ca-0f2e048703fd nodeName:}" failed. No retries permitted until 2025-11-26 19:37:26.729401789 +0000 UTC m=+110.914741427 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/3fc9a944-7747-4626-b4ca-0f2e048703fd-gcr-creds") pod "registry-creds-764b6fb674-rspjs" (UID: "3fc9a944-7747-4626-b4ca-0f2e048703fd") : secret "registry-creds-gcr" not found
	Nov 26 19:36:55 addons-368879 kubelet[1272]: I1126 19:36:55.157504    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-rmxlg" podStartSLOduration=65.786731064 podStartE2EDuration="1m13.157489898s" podCreationTimestamp="2025-11-26 19:35:42 +0000 UTC" firstStartedPulling="2025-11-26 19:36:47.458719307 +0000 UTC m=+71.644058926" lastFinishedPulling="2025-11-26 19:36:54.82947813 +0000 UTC m=+79.014817760" observedRunningTime="2025-11-26 19:36:55.157317653 +0000 UTC m=+79.342657332" watchObservedRunningTime="2025-11-26 19:36:55.157489898 +0000 UTC m=+79.342829539"
	Nov 26 19:36:55 addons-368879 kubelet[1272]: I1126 19:36:55.893741    1272 scope.go:117] "RemoveContainer" containerID="7ef45bbfbceee3938a917a57a32c3bc56aa0473c17e7840a6c8838ae2357d5b9"
	Nov 26 19:36:56 addons-368879 kubelet[1272]: I1126 19:36:56.156824    1272 scope.go:117] "RemoveContainer" containerID="7ef45bbfbceee3938a917a57a32c3bc56aa0473c17e7840a6c8838ae2357d5b9"
	Nov 26 19:36:56 addons-368879 kubelet[1272]: I1126 19:36:56.889959    1272 scope.go:117] "RemoveContainer" containerID="dbbd2c500bde0b5edfe0e51886ab9f90019a368b7c7ed35506db83c59a718631"
	Nov 26 19:36:56 addons-368879 kubelet[1272]: I1126 19:36:56.943336    1272 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 26 19:36:56 addons-368879 kubelet[1272]: I1126 19:36:56.943379    1272 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 26 19:36:57 addons-368879 kubelet[1272]: I1126 19:36:57.176810    1272 scope.go:117] "RemoveContainer" containerID="dbbd2c500bde0b5edfe0e51886ab9f90019a368b7c7ed35506db83c59a718631"
	Nov 26 19:36:57 addons-368879 kubelet[1272]: I1126 19:36:57.222990    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-277vt" podStartSLOduration=66.456121294 podStartE2EDuration="1m8.222970285s" podCreationTimestamp="2025-11-26 19:35:49 +0000 UTC" firstStartedPulling="2025-11-26 19:36:55.086451442 +0000 UTC m=+79.271791074" lastFinishedPulling="2025-11-26 19:36:56.853300433 +0000 UTC m=+81.038640065" observedRunningTime="2025-11-26 19:36:57.199780815 +0000 UTC m=+81.385120454" watchObservedRunningTime="2025-11-26 19:36:57.222970285 +0000 UTC m=+81.408309933"
	Nov 26 19:36:57 addons-368879 kubelet[1272]: I1126 19:36:57.550990    1272 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwqm8\" (UniqueName: \"kubernetes.io/projected/6f8b417b-f2f3-437f-8089-37b22e9c8cfd-kube-api-access-dwqm8\") pod \"6f8b417b-f2f3-437f-8089-37b22e9c8cfd\" (UID: \"6f8b417b-f2f3-437f-8089-37b22e9c8cfd\") "
	Nov 26 19:36:57 addons-368879 kubelet[1272]: I1126 19:36:57.553685    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8b417b-f2f3-437f-8089-37b22e9c8cfd-kube-api-access-dwqm8" (OuterVolumeSpecName: "kube-api-access-dwqm8") pod "6f8b417b-f2f3-437f-8089-37b22e9c8cfd" (UID: "6f8b417b-f2f3-437f-8089-37b22e9c8cfd"). InnerVolumeSpecName "kube-api-access-dwqm8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 26 19:36:57 addons-368879 kubelet[1272]: I1126 19:36:57.652110    1272 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dwqm8\" (UniqueName: \"kubernetes.io/projected/6f8b417b-f2f3-437f-8089-37b22e9c8cfd-kube-api-access-dwqm8\") on node \"addons-368879\" DevicePath \"\""
	Nov 26 19:36:58 addons-368879 kubelet[1272]: I1126 19:36:58.191830    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bc3d288cb5f86cc957381faf5f1387254702f34850b9b1f4d828ceb97f98152"
	Nov 26 19:36:58 addons-368879 kubelet[1272]: I1126 19:36:58.859652    1272 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwl8w\" (UniqueName: \"kubernetes.io/projected/026e0a5c-69e2-46ed-9b4b-7183ac9ba5e3-kube-api-access-qwl8w\") pod \"026e0a5c-69e2-46ed-9b4b-7183ac9ba5e3\" (UID: \"026e0a5c-69e2-46ed-9b4b-7183ac9ba5e3\") "
	Nov 26 19:36:58 addons-368879 kubelet[1272]: I1126 19:36:58.861929    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026e0a5c-69e2-46ed-9b4b-7183ac9ba5e3-kube-api-access-qwl8w" (OuterVolumeSpecName: "kube-api-access-qwl8w") pod "026e0a5c-69e2-46ed-9b4b-7183ac9ba5e3" (UID: "026e0a5c-69e2-46ed-9b4b-7183ac9ba5e3"). InnerVolumeSpecName "kube-api-access-qwl8w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 26 19:36:58 addons-368879 kubelet[1272]: I1126 19:36:58.960858    1272 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qwl8w\" (UniqueName: \"kubernetes.io/projected/026e0a5c-69e2-46ed-9b4b-7183ac9ba5e3-kube-api-access-qwl8w\") on node \"addons-368879\" DevicePath \"\""
	Nov 26 19:36:59 addons-368879 kubelet[1272]: I1126 19:36:59.196608    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7549b111a1684319ca7b46dcbb7d524685fce722faf4e1bc824b9c8e3817dadd"
	Nov 26 19:37:01 addons-368879 kubelet[1272]: I1126 19:37:01.213212    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-f6sg8" podStartSLOduration=72.686892254 podStartE2EDuration="1m18.213193558s" podCreationTimestamp="2025-11-26 19:35:43 +0000 UTC" firstStartedPulling="2025-11-26 19:36:55.08970383 +0000 UTC m=+79.275043447" lastFinishedPulling="2025-11-26 19:37:00.61600513 +0000 UTC m=+84.801344751" observedRunningTime="2025-11-26 19:37:01.212379814 +0000 UTC m=+85.397719455" watchObservedRunningTime="2025-11-26 19:37:01.213193558 +0000 UTC m=+85.398533197"
	Nov 26 19:37:04 addons-368879 kubelet[1272]: I1126 19:37:04.235540    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-4cdfn" podStartSLOduration=1.945261584 podStartE2EDuration="42.235520759s" podCreationTimestamp="2025-11-26 19:36:22 +0000 UTC" firstStartedPulling="2025-11-26 19:36:23.340735445 +0000 UTC m=+47.526075067" lastFinishedPulling="2025-11-26 19:37:03.63099461 +0000 UTC m=+87.816334242" observedRunningTime="2025-11-26 19:37:04.234643065 +0000 UTC m=+88.419982705" watchObservedRunningTime="2025-11-26 19:37:04.235520759 +0000 UTC m=+88.420860398"
	Nov 26 19:37:06 addons-368879 kubelet[1272]: I1126 19:37:06.823601    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lqkb\" (UniqueName: \"kubernetes.io/projected/d5bca682-20c4-4eb9-91cc-2278bde34e49-kube-api-access-2lqkb\") pod \"busybox\" (UID: \"d5bca682-20c4-4eb9-91cc-2278bde34e49\") " pod="default/busybox"
	Nov 26 19:37:06 addons-368879 kubelet[1272]: I1126 19:37:06.823651    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d5bca682-20c4-4eb9-91cc-2278bde34e49-gcp-creds\") pod \"busybox\" (UID: \"d5bca682-20c4-4eb9-91cc-2278bde34e49\") " pod="default/busybox"
	Nov 26 19:37:08 addons-368879 kubelet[1272]: I1126 19:37:08.252855    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.510461264 podStartE2EDuration="2.25283881s" podCreationTimestamp="2025-11-26 19:37:06 +0000 UTC" firstStartedPulling="2025-11-26 19:37:07.022329057 +0000 UTC m=+91.207668674" lastFinishedPulling="2025-11-26 19:37:07.76470659 +0000 UTC m=+91.950046220" observedRunningTime="2025-11-26 19:37:08.251023202 +0000 UTC m=+92.436362841" watchObservedRunningTime="2025-11-26 19:37:08.25283881 +0000 UTC m=+92.438178448"
	Nov 26 19:37:15 addons-368879 kubelet[1272]: I1126 19:37:15.685243    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/05e3f0cf-f584-45a4-8207-05bff33cd676-gcp-creds\") pod \"nginx\" (UID: \"05e3f0cf-f584-45a4-8207-05bff33cd676\") " pod="default/nginx"
	Nov 26 19:37:15 addons-368879 kubelet[1272]: I1126 19:37:15.685308    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q66hz\" (UniqueName: \"kubernetes.io/projected/05e3f0cf-f584-45a4-8207-05bff33cd676-kube-api-access-q66hz\") pod \"nginx\" (UID: \"05e3f0cf-f584-45a4-8207-05bff33cd676\") " pod="default/nginx"
	
	
	==> storage-provisioner [25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038] <==
	W1126 19:36:51.541690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:36:53.544498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:36:53.548430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:36:55.551587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:36:55.556305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:36:57.561829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:36:57.568852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:36:59.572295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:36:59.575981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:01.579330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:01.583005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:03.585817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:03.608287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:05.611249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:05.614553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:07.617156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:07.620574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:09.623277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:09.627715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:11.630579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:11.634273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:13.637241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:13.640650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:15.643353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:37:15.647723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-368879 -n addons-368879
helpers_test.go:269: (dbg) Run:  kubectl --context addons-368879 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx gcp-auth-certs-create-ljwzp gcp-auth-certs-patch-x56k5 ingress-nginx-admission-create-tbk6s ingress-nginx-admission-patch-8mvpf registry-creds-764b6fb674-rspjs
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-368879 describe pod nginx gcp-auth-certs-create-ljwzp gcp-auth-certs-patch-x56k5 ingress-nginx-admission-create-tbk6s ingress-nginx-admission-patch-8mvpf registry-creds-764b6fb674-rspjs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-368879 describe pod nginx gcp-auth-certs-create-ljwzp gcp-auth-certs-patch-x56k5 ingress-nginx-admission-create-tbk6s ingress-nginx-admission-patch-8mvpf registry-creds-764b6fb674-rspjs: exit status 1 (69.911746ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-368879/192.168.49.2
	Start Time:       Wed, 26 Nov 2025 19:37:15 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q66hz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q66hz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/nginx to addons-368879
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx:alpine"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-ljwzp" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-x56k5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-tbk6s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8mvpf" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rspjs" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-368879 describe pod nginx gcp-auth-certs-create-ljwzp gcp-auth-certs-patch-x56k5 ingress-nginx-admission-create-tbk6s ingress-nginx-admission-patch-8mvpf registry-creds-764b6fb674-rspjs: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable headlamp --alsologtostderr -v=1: exit status 11 (242.821498ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:17.651138   24873 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:17.651481   24873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:17.651495   24873 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:17.651502   24873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:17.651804   24873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:17.652131   24873 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:17.652615   24873 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:17.652645   24873 addons.go:622] checking whether the cluster is paused
	I1126 19:37:17.652774   24873 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:17.652799   24873 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:17.653318   24873 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:17.673444   24873 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:17.673530   24873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:17.691400   24873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:17.787307   24873 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:17.787379   24873 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:17.814054   24873 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:17.814072   24873 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:17.814077   24873 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:17.814080   24873 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:17.814083   24873 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:17.814087   24873 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:17.814092   24873 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:17.814096   24873 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:17.814101   24873 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:17.814115   24873 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:17.814123   24873 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:17.814128   24873 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:17.814133   24873 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:17.814138   24873 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:17.814142   24873 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:17.814152   24873 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:17.814159   24873 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:17.814163   24873 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:17.814166   24873 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:17.814169   24873 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:17.814172   24873 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:17.814174   24873 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:17.814177   24873 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:17.814179   24873 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:17.814182   24873 cri.go:89] found id: ""
	I1126 19:37:17.814230   24873 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:17.826824   24873 out.go:203] 
	W1126 19:37:17.828100   24873 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:17.828126   24873 out.go:285] * 
	* 
	W1126 19:37:17.831308   24873 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:17.832542   24873 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.62s)

TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-wqlhm" [8ce6f822-1dda-42d9-8ddb-f529e2cd2302] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003053559s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (232.844339ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:33.806164   26183 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:33.806433   26183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:33.806442   26183 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:33.806447   26183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:33.806646   26183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:33.806861   26183 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:33.807147   26183 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:33.807165   26183 addons.go:622] checking whether the cluster is paused
	I1126 19:37:33.807240   26183 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:33.807259   26183 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:33.807627   26183 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:33.824533   26183 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:33.824842   26183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:33.841899   26183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:33.938626   26183 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:33.938686   26183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:33.965661   26183 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:33.965683   26183 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:33.965688   26183 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:33.965691   26183 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:33.965694   26183 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:33.965698   26183 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:33.965702   26183 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:33.965705   26183 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:33.965708   26183 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:33.965715   26183 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:33.965720   26183 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:33.965724   26183 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:33.965729   26183 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:33.965734   26183 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:33.965740   26183 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:33.965747   26183 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:33.965753   26183 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:33.965759   26183 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:33.965763   26183 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:33.965768   26183 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:33.965776   26183 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:33.965779   26183 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:33.965782   26183 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:33.965785   26183 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:33.965787   26183 cri.go:89] found id: ""
	I1126 19:37:33.965838   26183 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:33.979025   26183 out.go:203] 
	W1126 19:37:33.980037   26183 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:33.980054   26183 out.go:285] * 
	* 
	W1126 19:37:33.983198   26183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:33.984346   26183 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)

TestAddons/parallel/LocalPath (9.07s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-368879 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-368879 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368879 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d2687ef3-8266-4bd4-9bbd-e3adac65cb7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d2687ef3-8266-4bd4-9bbd-e3adac65cb7e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d2687ef3-8266-4bd4-9bbd-e3adac65cb7e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002538806s
addons_test.go:967: (dbg) Run:  kubectl --context addons-368879 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 ssh "cat /opt/local-path-provisioner/pvc-73e84fea-39e5-4ca4-a00c-b412c775b12a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-368879 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-368879 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (247.800154ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:35.631268   26589 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:35.631614   26589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:35.631627   26589 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:35.631633   26589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:35.631914   26589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:35.632230   26589 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:35.632664   26589 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:35.632687   26589 addons.go:622] checking whether the cluster is paused
	I1126 19:37:35.632805   26589 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:35.632823   26589 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:35.633330   26589 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:35.652009   26589 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:35.652055   26589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:35.671231   26589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:35.770299   26589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:35.770374   26589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:35.797161   26589 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:35.797178   26589 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:35.797183   26589 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:35.797187   26589 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:35.797192   26589 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:35.797197   26589 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:35.797201   26589 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:35.797205   26589 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:35.797209   26589 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:35.797216   26589 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:35.797221   26589 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:35.797225   26589 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:35.797228   26589 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:35.797231   26589 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:35.797234   26589 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:35.797247   26589 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:35.797255   26589 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:35.797260   26589 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:35.797263   26589 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:35.797267   26589 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:35.797271   26589 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:35.797278   26589 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:35.797281   26589 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:35.797288   26589 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:35.797293   26589 cri.go:89] found id: ""
	I1126 19:37:35.797336   26589 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:35.810765   26589 out.go:203] 
	W1126 19:37:35.811861   26589 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:35.811885   26589 out.go:285] * 
	* 
	W1126 19:37:35.816735   26589 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:35.818009   26589 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.07s)

TestAddons/parallel/NvidiaDevicePlugin (5.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-jr6zz" [3a661477-c629-4de7-b047-25a87d532f45] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003281808s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (232.69982ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:29.136643   25979 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:29.136885   25979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:29.136893   25979 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:29.136898   25979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:29.137113   25979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:29.137333   25979 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:29.137655   25979 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:29.137674   25979 addons.go:622] checking whether the cluster is paused
	I1126 19:37:29.137755   25979 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:29.137769   25979 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:29.138128   25979 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:29.154800   25979 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:29.154854   25979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:29.171514   25979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:29.267406   25979 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:29.267518   25979 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:29.294965   25979 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:29.294987   25979 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:29.294995   25979 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:29.295000   25979 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:29.295005   25979 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:29.295011   25979 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:29.295016   25979 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:29.295021   25979 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:29.295027   25979 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:29.295042   25979 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:29.295050   25979 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:29.295053   25979 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:29.295055   25979 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:29.295059   25979 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:29.295061   25979 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:29.295069   25979 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:29.295074   25979 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:29.295078   25979 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:29.295082   25979 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:29.295085   25979 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:29.295088   25979 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:29.295090   25979 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:29.295093   25979 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:29.295096   25979 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:29.295099   25979 cri.go:89] found id: ""
	I1126 19:37:29.295131   25979 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:29.307731   25979 out.go:203] 
	W1126 19:37:29.308925   25979 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:29.308944   25979 out.go:285] * 
	* 
	W1126 19:37:29.312234   25979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:29.313249   25979 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)

TestAddons/parallel/Yakd (6.23s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-zkdvc" [1b924fae-5849-402e-818c-3e4c96669528] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00318738s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable yakd --alsologtostderr -v=1: exit status 11 (230.09718ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:26.578669   25613 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:26.578928   25613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:26.578936   25613 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:26.578940   25613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:26.579100   25613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:26.579326   25613 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:26.579615   25613 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:26.579631   25613 addons.go:622] checking whether the cluster is paused
	I1126 19:37:26.579702   25613 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:26.579715   25613 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:26.580081   25613 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:26.597116   25613 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:26.597154   25613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:26.613419   25613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:26.709343   25613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:26.709414   25613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:26.736279   25613 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:26.736298   25613 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:26.736304   25613 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:26.736309   25613 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:26.736313   25613 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:26.736318   25613 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:26.736323   25613 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:26.736328   25613 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:26.736332   25613 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:26.736339   25613 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:26.736344   25613 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:26.736349   25613 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:26.736357   25613 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:26.736362   25613 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:26.736367   25613 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:26.736383   25613 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:26.736391   25613 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:26.736397   25613 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:26.736402   25613 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:26.736406   25613 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:26.736412   25613 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:26.736417   25613 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:26.736426   25613 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:26.736431   25613 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:26.736436   25613 cri.go:89] found id: ""
	I1126 19:37:26.736504   25613 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:26.749728   25613 out.go:203] 
	W1126 19:37:26.750910   25613 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:26.750925   25613 out.go:285] * 
	* 
	W1126 19:37:26.753886   25613 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:26.755192   25613 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.23s)

TestAddons/parallel/AmdGpuDevicePlugin (6.24s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-gj5pg" [5041bbf6-8005-42bc-8769-90708d6711d1] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003058976s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368879 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368879 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (238.64923ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:37:23.897288   25474 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:37:23.897572   25474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:23.897582   25474 out.go:374] Setting ErrFile to fd 2...
	I1126 19:37:23.897586   25474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:37:23.897763   25474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:37:23.897999   25474 mustload.go:66] Loading cluster: addons-368879
	I1126 19:37:23.898300   25474 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:23.898317   25474 addons.go:622] checking whether the cluster is paused
	I1126 19:37:23.898394   25474 config.go:182] Loaded profile config "addons-368879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:23.898408   25474 host.go:66] Checking if "addons-368879" exists ...
	I1126 19:37:23.898797   25474 cli_runner.go:164] Run: docker container inspect addons-368879 --format={{.State.Status}}
	I1126 19:37:23.915786   25474 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:23.915835   25474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-368879
	I1126 19:37:23.933320   25474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/addons-368879/id_rsa Username:docker}
	I1126 19:37:24.029358   25474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:24.029430   25474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:24.056778   25474 cri.go:89] found id: "cff11b25559308e517bef61e7e64b2cf1d052d365ba751329767d36a353b778f"
	I1126 19:37:24.056796   25474 cri.go:89] found id: "212fd16c128eb496d22ee7050fd12a081b49ab88a5da4ad7ba0218f5c3a6afda"
	I1126 19:37:24.056802   25474 cri.go:89] found id: "9ad67d88c52b2a7f9372f1150442ba3a50bb8cf2680525e52a8619e4b2c8976b"
	I1126 19:37:24.056806   25474 cri.go:89] found id: "36ba93a3abe1946fafeb5c3585ccba2c0309402b73aa6b93fad1679ab9b8230c"
	I1126 19:37:24.056810   25474 cri.go:89] found id: "4197c12bab9b39cb37fcf577883d45acb6708a76bacaf452b81bbbac83ebc32d"
	I1126 19:37:24.056815   25474 cri.go:89] found id: "714962454fedbffb52b88389444757a4051639c470d89a7391a7d4f8507cd1ef"
	I1126 19:37:24.056819   25474 cri.go:89] found id: "603eac3a5db355f206ba3dc4ff6ed41cd4d2d57dd344b753ba2b220aea0ea935"
	I1126 19:37:24.056824   25474 cri.go:89] found id: "d97dad65e0c228f566bf65943a4547da6e571b1a6529c74a8ae4fca908aff071"
	I1126 19:37:24.056827   25474 cri.go:89] found id: "e825b2f37651c5a80788a75b90cfff51ac5bb5898b783c64714a77a867289740"
	I1126 19:37:24.056841   25474 cri.go:89] found id: "7dfc385a20d465a35afa574dc1af5a49eb3c8c1fa7ed51ee776a301a59c2361d"
	I1126 19:37:24.056846   25474 cri.go:89] found id: "47283ac77595bb9d7a2a45c36475aa798bdd174eb0fbf75c0f29ca447edce13c"
	I1126 19:37:24.056850   25474 cri.go:89] found id: "5a22bb7b95033629d4ca4f1e2bcfe31205813495c016ac6b92a54b3988f5aeb1"
	I1126 19:37:24.056856   25474 cri.go:89] found id: "efa536fb3d778fa3f66c962d9c414995c88acf53c34702183c66e5f69d36811f"
	I1126 19:37:24.056860   25474 cri.go:89] found id: "65470a1503151ad0a4d04f74207c75c03e574621ecbe88aa80a882447651e9dd"
	I1126 19:37:24.056866   25474 cri.go:89] found id: "eb50e6ab9debfa0a5504acf11c706fb150d3db77f0f345b4ad749fda6185bc6e"
	I1126 19:37:24.056878   25474 cri.go:89] found id: "719640c6c4cf6714ce19314de9295329020ce388e99867709ed34e6066f0a048"
	I1126 19:37:24.056887   25474 cri.go:89] found id: "d64fe5dcd9941dd86a15e1a24ecd4bcd3f1c12d60fa89d08729cb02ae1f3a859"
	I1126 19:37:24.056893   25474 cri.go:89] found id: "25e48df5dfb4bded98bc722a69d41cfb23ba0789fcd742e1bbb988ed73575038"
	I1126 19:37:24.056898   25474 cri.go:89] found id: "59ceeea3b62d8764e5784e2b3a3de748607f552e64062869f959f25d6d89b19c"
	I1126 19:37:24.056932   25474 cri.go:89] found id: "c71770537fdbd023344c0899f0ec3b332c10bdffa104f53bcb64a05e3456fdff"
	I1126 19:37:24.056958   25474 cri.go:89] found id: "6d9b40c465aff763d4d0644216d13f8019e5ff223b6cd4d430832c2712521310"
	I1126 19:37:24.056964   25474 cri.go:89] found id: "00f8c7ca3495a11998b111688472e6da4d0e62908b4fe1c119e083354e98f6e7"
	I1126 19:37:24.056970   25474 cri.go:89] found id: "f7ace30aee7afbfe85c6ffe68e9b951559babdf0fc3cfb0b08e0ecaba1352160"
	I1126 19:37:24.056976   25474 cri.go:89] found id: "beecb43fac96b0dc52fe3fc4694c61a44626abc7a7d215655c08387e375b3e65"
	I1126 19:37:24.056982   25474 cri.go:89] found id: ""
	I1126 19:37:24.057030   25474 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:37:24.070923   25474 out.go:203] 
	W1126 19:37:24.072085   25474 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:37:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:37:24.072103   25474 out.go:285] * 
	* 
	W1126 19:37:24.075003   25474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:37:24.076213   25474 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-368879 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.24s)
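Note: the addon-disable failures above (LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin) all exit with `MK_ADDON_DISABLE_PAUSED` for the same underlying reason: the paused-cluster check shells out to `sudo runc list -f json`, which fails on this cri-o node with `open /run/runc: no such file or directory`. cri-o commonly runs containers through crun, whose state lives under `/run/crun`, so a probe hard-coded to `/run/runc` finds nothing. A minimal sketch of a more tolerant probe (the function name and fallback order are illustrative, not minikube's actual code):

```shell
# probe_runtime_state DIR... : print the basename of the first existing
# runtime state directory, or "none" if no candidate exists.
probe_runtime_state() {
  for dir in "$@"; do
    if [ -d "$dir" ]; then
      basename "$dir"
      return 0
    fi
  done
  echo "none"
  return 1
}

# Simulated cri-o node layout: only the crun state dir exists,
# mirroring why `runc list` fails in the logs above.
root=$(mktemp -d)
mkdir -p "$root/run/crun"
probe_runtime_state "$root/run/runc" "$root/run/crun"   # prints: crun
```

On the real node, `minikube ssh -- ls /run` would show which runtime state directories actually exist.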

TestFunctional/parallel/ServiceCmdConnect (602.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-960066 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-960066 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-ffsgf" [33a96605-8568-4302-a7a6-3f4c314a18ac] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960066 -n functional-960066
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-26 19:53:24.147294041 +0000 UTC m=+1117.072723629
functional_test.go:1645: (dbg) Run:  kubectl --context functional-960066 describe po hello-node-connect-7d85dfc575-ffsgf -n default
functional_test.go:1645: (dbg) kubectl --context functional-960066 describe po hello-node-connect-7d85dfc575-ffsgf -n default:
Name:             hello-node-connect-7d85dfc575-ffsgf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-960066/192.168.49.2
Start Time:       Wed, 26 Nov 2025 19:43:23 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rmjh4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rmjh4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ffsgf to functional-960066
  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-960066 logs hello-node-connect-7d85dfc575-ffsgf -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-960066 logs hello-node-connect-7d85dfc575-ffsgf -n default: exit status 1 (58.145834ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ffsgf" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-960066 logs hello-node-connect-7d85dfc575-ffsgf -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
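Editor's note: the repeated `ErrImagePull` events above come from CRI-O's short-name enforcement rejecting the unqualified `kicbase/echo-server` reference ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"). As a hedged sketch only — the file path and alias below are illustrative assumptions, not taken from this run — a containers-registries alias that would disambiguate the short name on the node looks like:

```toml
# Illustrative drop-in, e.g. /etc/containers/registries.conf.d/99-echo-server.conf
# (path is an assumption). With short-name-mode = "enforcing", an unqualified
# image name that could resolve against multiple unqualified-search registries
# fails with the "ambiguous list" error seen in the events above; an explicit
# alias pins it to a single registry.

[aliases]
"kicbase/echo-server" = "docker.io/kicbase/echo-server"
```

Alternatively, referencing the fully qualified `docker.io/kicbase/echo-server` image in the Deployment spec sidesteps short-name resolution entirely.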
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-960066 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-ffsgf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-960066/192.168.49.2
Start Time:       Wed, 26 Nov 2025 19:43:23 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rmjh4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rmjh4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ffsgf to functional-960066
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-960066 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-960066 logs -l app=hello-node-connect: exit status 1 (58.652705ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ffsgf" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-960066 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-960066 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.227.178
IPs:                      10.101.227.178
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31474/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-960066
helpers_test.go:243: (dbg) docker inspect functional-960066:

-- stdout --
	[
	    {
	        "Id": "3ba9191b8e191419f94e550676074d83845ba76f857f99037260987541c0473b",
	        "Created": "2025-11-26T19:41:08.279883914Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37687,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T19:41:08.307925906Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/3ba9191b8e191419f94e550676074d83845ba76f857f99037260987541c0473b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ba9191b8e191419f94e550676074d83845ba76f857f99037260987541c0473b/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ba9191b8e191419f94e550676074d83845ba76f857f99037260987541c0473b/hosts",
	        "LogPath": "/var/lib/docker/containers/3ba9191b8e191419f94e550676074d83845ba76f857f99037260987541c0473b/3ba9191b8e191419f94e550676074d83845ba76f857f99037260987541c0473b-json.log",
	        "Name": "/functional-960066",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-960066:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-960066",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3ba9191b8e191419f94e550676074d83845ba76f857f99037260987541c0473b",
	                "LowerDir": "/var/lib/docker/overlay2/389f5bc0fb98c4f7e0a727d1393a278c6b8b91932f0c745700978748ef782ad5-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/389f5bc0fb98c4f7e0a727d1393a278c6b8b91932f0c745700978748ef782ad5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/389f5bc0fb98c4f7e0a727d1393a278c6b8b91932f0c745700978748ef782ad5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/389f5bc0fb98c4f7e0a727d1393a278c6b8b91932f0c745700978748ef782ad5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-960066",
	                "Source": "/var/lib/docker/volumes/functional-960066/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-960066",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-960066",
	                "name.minikube.sigs.k8s.io": "functional-960066",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "325cbbe4e5bc4dba5918383e5cc7ce561d06bbd3b44043d45da65fce9a5ce8e1",
	            "SandboxKey": "/var/run/docker/netns/325cbbe4e5bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-960066": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0dc1a18ae5a308b03835cb1b1a02225af87cada5dce8ea7eddd53feadb742ac6",
	                    "EndpointID": "d75801ea6ddbaa7f9b70a458805af6d5b57e000a0317cab5c07b4c94ac90c545",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "3e:ae:e2:ff:5e:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-960066",
	                        "3ba9191b8e19"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-960066 -n functional-960066
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-960066 logs -n 25: (1.158714371s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-960066 image save --daemon kicbase/echo-server:functional-960066 --alsologtostderr          │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ addons         │ functional-960066 addons list                                                                          │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ addons         │ functional-960066 addons list -o json                                                                  │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ ssh            │ functional-960066 ssh sudo cat /etc/test/nested/copy/14258/hosts                                       │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ ssh            │ functional-960066 ssh sudo cat /etc/ssl/certs/14258.pem                                                │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ ssh            │ functional-960066 ssh sudo cat /usr/share/ca-certificates/14258.pem                                    │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ ssh            │ functional-960066 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ ssh            │ functional-960066 ssh sudo cat /etc/ssl/certs/142582.pem                                               │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ ssh            │ functional-960066 ssh sudo cat /usr/share/ca-certificates/142582.pem                                   │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ ssh            │ functional-960066 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ image          │ functional-960066 image ls --format short --alsologtostderr                                            │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ image          │ functional-960066 image ls --format yaml --alsologtostderr                                             │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ ssh            │ functional-960066 ssh pgrep buildkitd                                                                  │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │                     │
	│ image          │ functional-960066 image build -t localhost/my-image:functional-960066 testdata/build --alsologtostderr │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ image          │ functional-960066 image ls                                                                             │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ image          │ functional-960066 image ls --format json --alsologtostderr                                             │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ image          │ functional-960066 image ls --format table --alsologtostderr                                            │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ update-context │ functional-960066 update-context --alsologtostderr -v=2                                                │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ update-context │ functional-960066 update-context --alsologtostderr -v=2                                                │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ update-context │ functional-960066 update-context --alsologtostderr -v=2                                                │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:43 UTC │ 26 Nov 25 19:43 UTC │
	│ service        │ functional-960066 service list                                                                         │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:53 UTC │ 26 Nov 25 19:53 UTC │
	│ service        │ functional-960066 service list -o json                                                                 │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:53 UTC │ 26 Nov 25 19:53 UTC │
	│ service        │ functional-960066 service --namespace=default --https --url hello-node                                 │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:53 UTC │                     │
	│ service        │ functional-960066 service hello-node --url --format={{.IP}}                                            │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:53 UTC │                     │
	│ service        │ functional-960066 service hello-node --url                                                             │ functional-960066 │ jenkins │ v1.37.0 │ 26 Nov 25 19:53 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:43:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:43:08.782697   47678 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:43:08.782908   47678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:43:08.782916   47678 out.go:374] Setting ErrFile to fd 2...
	I1126 19:43:08.782920   47678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:43:08.783127   47678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:43:08.783552   47678 out.go:368] Setting JSON to false
	I1126 19:43:08.784384   47678 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1539,"bootTime":1764184650,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:43:08.784433   47678 start.go:143] virtualization: kvm guest
	I1126 19:43:08.785963   47678 out.go:179] * [functional-960066] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 19:43:08.787347   47678 notify.go:221] Checking for updates...
	I1126 19:43:08.787374   47678 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:43:08.788450   47678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:43:08.789538   47678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 19:43:08.790647   47678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 19:43:08.791739   47678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 19:43:08.792806   47678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:43:08.794215   47678 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:43:08.794722   47678 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:43:08.817998   47678 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 19:43:08.818067   47678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:43:08.873622   47678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-26 19:43:08.863694575 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:43:08.873723   47678 docker.go:319] overlay module found
	I1126 19:43:08.876294   47678 out.go:179] * Using the docker driver based on existing profile
	I1126 19:43:08.877365   47678 start.go:309] selected driver: docker
	I1126 19:43:08.877393   47678 start.go:927] validating driver "docker" against &{Name:functional-960066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-960066 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:43:08.877509   47678 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:43:08.877589   47678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:43:08.941956   47678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-26 19:43:08.932530481 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:43:08.942740   47678 cni.go:84] Creating CNI manager for ""
	I1126 19:43:08.942803   47678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:43:08.942846   47678 start.go:353] cluster config:
	{Name:functional-960066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-960066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:43:08.945036   47678 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 26 19:43:31 functional-960066 crio[3600]: time="2025-11-26T19:43:31.901058899Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.930799051Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=b1af3518-0a0c-4c59-b8d1-2e36b492dcbf name=/runtime.v1.ImageService/PullImage
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.931411177Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=0e79893e-a847-424d-ac2b-b65ed25daf02 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.933482913Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=b261f4e4-772d-4a11-bc5b-8a7ce6bd3878 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.940146795Z" level=info msg="Creating container: default/mysql-5bb876957f-xv8rn/mysql" id=a31958ac-2a78-4a70-9b31-e158c166d06c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.940269338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.945245867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.945825536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.972657082Z" level=info msg="Created container 2e31a65977abdadd183880aed3aabd2b9ec052f15effc1f67d69ae5a8235a674: default/mysql-5bb876957f-xv8rn/mysql" id=a31958ac-2a78-4a70-9b31-e158c166d06c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.973240316Z" level=info msg="Starting container: 2e31a65977abdadd183880aed3aabd2b9ec052f15effc1f67d69ae5a8235a674" id=94ee1199-318a-4be6-aef7-e20d8eb4e6aa name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 19:43:37 functional-960066 crio[3600]: time="2025-11-26T19:43:37.975445976Z" level=info msg="Started container" PID=7480 containerID=2e31a65977abdadd183880aed3aabd2b9ec052f15effc1f67d69ae5a8235a674 description=default/mysql-5bb876957f-xv8rn/mysql id=94ee1199-318a-4be6-aef7-e20d8eb4e6aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8b8e842f3394f868c91d4f45ed3b553505bd0934fa5a68253f33ac14694e42c
	Nov 26 19:43:38 functional-960066 crio[3600]: time="2025-11-26T19:43:38.388622295Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0147363e-cc25-434a-afdf-cb34c5a4b492 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:43:49 functional-960066 crio[3600]: time="2025-11-26T19:43:49.389066467Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=773c2078-7176-4796-94ee-8b33ad8a3dde name=/runtime.v1.ImageService/PullImage
	Nov 26 19:44:07 functional-960066 crio[3600]: time="2025-11-26T19:44:07.38864073Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=774626b2-2c12-4776-a1a8-7589c481ed05 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:44:15 functional-960066 crio[3600]: time="2025-11-26T19:44:15.389753595Z" level=info msg="Stopping pod sandbox: be64bbf00ed774594a6f2380e764d60659e60d77cb11b691185e1b18a03290f4" id=e3442279-0a62-40a4-be5e-bcb6ce6b8861 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 26 19:44:15 functional-960066 crio[3600]: time="2025-11-26T19:44:15.389809815Z" level=info msg="Stopped pod sandbox (already stopped): be64bbf00ed774594a6f2380e764d60659e60d77cb11b691185e1b18a03290f4" id=e3442279-0a62-40a4-be5e-bcb6ce6b8861 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 26 19:44:15 functional-960066 crio[3600]: time="2025-11-26T19:44:15.39009581Z" level=info msg="Removing pod sandbox: be64bbf00ed774594a6f2380e764d60659e60d77cb11b691185e1b18a03290f4" id=bba97f43-21d0-4a7f-b9dc-656aca7042ad name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 26 19:44:15 functional-960066 crio[3600]: time="2025-11-26T19:44:15.394745934Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 19:44:15 functional-960066 crio[3600]: time="2025-11-26T19:44:15.394846707Z" level=info msg="Removed pod sandbox: be64bbf00ed774594a6f2380e764d60659e60d77cb11b691185e1b18a03290f4" id=bba97f43-21d0-4a7f-b9dc-656aca7042ad name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 26 19:44:40 functional-960066 crio[3600]: time="2025-11-26T19:44:40.387925547Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1f5d0147-9f16-4df4-8e0c-a5886f155c0a name=/runtime.v1.ImageService/PullImage
	Nov 26 19:44:49 functional-960066 crio[3600]: time="2025-11-26T19:44:49.388293716Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d0c63eb7-035c-472a-b2d8-f89b457616b0 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:46:13 functional-960066 crio[3600]: time="2025-11-26T19:46:13.388148703Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c88e0f2b-13f1-48f2-a31c-76a3d9f7a5e7 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:46:20 functional-960066 crio[3600]: time="2025-11-26T19:46:20.388803082Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ab46d4a6-35cc-48db-b24d-b5d78880c65c name=/runtime.v1.ImageService/PullImage
	Nov 26 19:49:03 functional-960066 crio[3600]: time="2025-11-26T19:49:03.388390201Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2da9eb95-cbeb-4bce-a7b8-bf9294d40803 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:49:04 functional-960066 crio[3600]: time="2025-11-26T19:49:04.388734873Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=54f8d320-4778-4c9b-8676-a7fbb2ee026c name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2e31a65977abd       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   c8b8e842f3394       mysql-5bb876957f-xv8rn                       default
	1bd20d02c38d2       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   7baac3c53935a       sp-pod                                       default
	e59b11f0bd0d5       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   ff793afcec660       nginx-svc                                    default
	fd5bd0aafa2ca       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         10 minutes ago      Running             kubernetes-dashboard        0                   1576b70e063e7       kubernetes-dashboard-855c9754f9-d8l7h        kubernetes-dashboard
	9aaec156d5d63       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   0c6f93efe2f9b       dashboard-metrics-scraper-77bf4d6c4c-6dgjq   kubernetes-dashboard
	74e0b1731bf1e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   617514f27ea8f       busybox-mount                                default
	2deb3e50a4f33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   718945acfdbe4       storage-provisioner                          kube-system
	b9da38fc3cdf6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   5288cd8c60be1       kube-apiserver-functional-960066             kube-system
	c0b72d5e719c6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   ab0841f1ad817       kube-controller-manager-functional-960066    kube-system
	f0849a47c08fe       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   0a8633381e68b       etcd-functional-960066                       kube-system
	e918cf50a7de9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   22ee04fab5e3a       kube-scheduler-functional-960066             kube-system
	73f52c3836a94       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   00b16d8f5f9b5       coredns-66bc5c9577-knghn                     kube-system
	33a0a9815f832       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   718945acfdbe4       storage-provisioner                          kube-system
	b7b81aef601d8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   fe45c332175b2       kindnet-fnxk2                                kube-system
	b45332b145561       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   4b971c0f6d744       kube-proxy-l8s6s                             kube-system
	3576edab545e3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   00b16d8f5f9b5       coredns-66bc5c9577-knghn                     kube-system
	a6f489dc80ef9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   4b971c0f6d744       kube-proxy-l8s6s                             kube-system
	a906ef55a65aa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   fe45c332175b2       kindnet-fnxk2                                kube-system
	7e4f0d75f0c05       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   22ee04fab5e3a       kube-scheduler-functional-960066             kube-system
	5de133e857a57       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 12 minutes ago      Exited              kube-controller-manager     0                   ab0841f1ad817       kube-controller-manager-functional-960066    kube-system
	fb52f465f23a6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   0a8633381e68b       etcd-functional-960066                       kube-system
	
	
	==> coredns [3576edab545e376d21e53886f9386865f800650968237ff8b6215fe6547b10dd] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35516 - 22169 "HINFO IN 7607748148209076805.7789173916840661399. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.468997461s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [73f52c3836a9423a367da9591c1bc74a12856ff805a9ae3e39915aa3d0ef1c06] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51899 - 14480 "HINFO IN 7837565878900824143.2384684586135177094. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.106993304s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-960066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-960066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=functional-960066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_41_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:41:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-960066
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 19:53:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 19:51:47 +0000   Wed, 26 Nov 2025 19:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 19:51:47 +0000   Wed, 26 Nov 2025 19:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 19:51:47 +0000   Wed, 26 Nov 2025 19:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 19:51:47 +0000   Wed, 26 Nov 2025 19:41:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-960066
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                61f72585-a946-4060-80d9-8d695a74b5b2
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-rlq87                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-ffsgf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-xv8rn                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m54s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 coredns-66bc5c9577-knghn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-960066                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-fnxk2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-960066              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-960066     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-l8s6s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-960066              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-6dgjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d8l7h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-960066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-960066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-960066 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node functional-960066 event: Registered Node functional-960066 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-960066 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-960066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-960066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-960066 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-960066 event: Registered Node functional-960066 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [f0849a47c08febae8200a3ad1c4aa0aac7e477741f8d3b1b7b7f556b40e14722] <==
	{"level":"warn","ts":"2025-11-26T19:42:35.752579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.759202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.765111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.771769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.778344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.792605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.799593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.806480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.813640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.819449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.825159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.840223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.846441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.853655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:35.901523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39784","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T19:43:24.798249Z","caller":"traceutil/trace.go:172","msg":"trace[1779319312] transaction","detail":"{read_only:false; response_revision:772; number_of_response:1; }","duration":"129.213498ms","start":"2025-11-26T19:43:24.669009Z","end":"2025-11-26T19:43:24.798222Z","steps":["trace[1779319312] 'process raft request'  (duration: 77.594499ms)","trace[1779319312] 'compare'  (duration: 51.487511ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-26T19:43:38.951095Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.75607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-26T19:43:38.951187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.1859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-26T19:43:38.951131Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"218.871811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T19:43:38.951221Z","caller":"traceutil/trace.go:172","msg":"trace[1890363167] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:836; }","duration":"218.974201ms","start":"2025-11-26T19:43:38.732242Z","end":"2025-11-26T19:43:38.951216Z","steps":["trace[1890363167] 'range keys from in-memory index tree'  (duration: 218.804378ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T19:43:38.951221Z","caller":"traceutil/trace.go:172","msg":"trace[971434758] range","detail":"{range_begin:/registry/controllers; range_end:; response_count:0; response_revision:836; }","duration":"102.221172ms","start":"2025-11-26T19:43:38.848992Z","end":"2025-11-26T19:43:38.951213Z","steps":["trace[971434758] 'range keys from in-memory index tree'  (duration: 102.136306ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T19:43:38.951194Z","caller":"traceutil/trace.go:172","msg":"trace[1719680389] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:836; }","duration":"108.867516ms","start":"2025-11-26T19:43:38.842314Z","end":"2025-11-26T19:43:38.951181Z","steps":["trace[1719680389] 'range keys from in-memory index tree'  (duration: 108.675407ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T19:52:35.432221Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1118}
	{"level":"info","ts":"2025-11-26T19:52:35.450406Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1118,"took":"17.803312ms","hash":3808876643,"current-db-size-bytes":3424256,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1548288,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-26T19:52:35.450441Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3808876643,"revision":1118,"compact-revision":-1}
	
	
	==> etcd [fb52f465f23a6d4719d5ca58fe8d96dbfd910aaf6811577825a7575731ed5c9f] <==
	{"level":"warn","ts":"2025-11-26T19:41:18.322075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:41:18.364554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:13.439554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:13.439554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:13.456189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:42:13.470834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49218","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T19:42:13.473923Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-26T19:42:13.473985Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-960066","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-11-26T19:42:13.474102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49216","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:49216: use of closed network connection"}
	{"level":"error","ts":"2025-11-26T19:42:13.474134Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-26T19:42:13.477544Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-26T19:42:13.477596Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T19:42:13.477614Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-26T19:42:13.477660Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-26T19:42:13.477676Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-26T19:42:13.477695Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-26T19:42:13.477734Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-26T19:42:13.477749Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-26T19:42:13.477682Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-26T19:42:13.477770Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-26T19:42:13.477782Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T19:42:13.479588Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-26T19:42:13.479633Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T19:42:13.479654Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-26T19:42:13.479675Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-960066","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 19:53:25 up 35 min,  0 user,  load average: 0.15, 0.21, 0.31
	Linux functional-960066 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a906ef55a65aa0351c53c98e48c3b35eb8df396c1e3cebab992a8bc84be3063b] <==
	I1126 19:41:27.400499       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 19:41:27.400771       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1126 19:41:27.400925       1 main.go:148] setting mtu 1500 for CNI 
	I1126 19:41:27.400943       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 19:41:27.400973       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T19:41:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 19:41:27.602780       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 19:41:27.696056       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 19:41:27.696168       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 19:41:27.696541       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 19:41:28.096320       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 19:41:28.096354       1 metrics.go:72] Registering metrics
	I1126 19:41:28.096441       1 controller.go:711] "Syncing nftables rules"
	I1126 19:41:37.603180       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:41:37.603217       1 main.go:301] handling current node
	I1126 19:41:47.603948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:41:47.603981       1 main.go:301] handling current node
	I1126 19:41:57.607276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:41:57.607307       1 main.go:301] handling current node
	
	
	==> kindnet [b7b81aef601d802bd20eb2b5e4f63bae8ef786500d92c465ffe442596c0ef3a8] <==
	I1126 19:51:23.532015       1 main.go:301] handling current node
	I1126 19:51:33.535143       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:51:33.535180       1 main.go:301] handling current node
	I1126 19:51:43.531536       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:51:43.531582       1 main.go:301] handling current node
	I1126 19:51:53.535980       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:51:53.536009       1 main.go:301] handling current node
	I1126 19:52:03.535472       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:52:03.535507       1 main.go:301] handling current node
	I1126 19:52:13.532550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:52:13.532585       1 main.go:301] handling current node
	I1126 19:52:23.532869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:52:23.532925       1 main.go:301] handling current node
	I1126 19:52:33.531567       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:52:33.531638       1 main.go:301] handling current node
	I1126 19:52:43.531584       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:52:43.531641       1 main.go:301] handling current node
	I1126 19:52:53.540696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:52:53.540735       1 main.go:301] handling current node
	I1126 19:53:03.535898       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:53:03.535934       1 main.go:301] handling current node
	I1126 19:53:13.531073       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:53:13.531110       1 main.go:301] handling current node
	I1126 19:53:23.540147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:53:23.540178       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b9da38fc3cdf63b2d90271540b78b85c529f8d440beeecf6cc3d75c7415a0355] <==
	I1126 19:42:37.240491       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1126 19:42:37.445298       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1126 19:42:37.446274       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 19:42:37.450513       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 19:42:37.560858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 19:42:37.560858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 19:42:37.560858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 19:42:37.737129       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 19:42:37.821833       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 19:42:37.861850       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 19:42:37.866346       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 19:42:40.016480       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 19:43:02.578319       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.76.169"}
	I1126 19:43:06.724246       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.253.112"}
	I1126 19:43:09.796683       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 19:43:09.899545       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.167.223"}
	I1126 19:43:09.910879       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.87.58"}
	I1126 19:43:19.380344       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.56.85"}
	I1126 19:43:23.835114       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.227.178"}
	E1126 19:43:27.928818       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41414: use of closed network connection
	I1126 19:43:31.534147       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.155.114"}
	E1126 19:43:36.856927       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45198: use of closed network connection
	E1126 19:43:45.661831       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45628: use of closed network connection
	E1126 19:43:46.748888       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45644: use of closed network connection
	I1126 19:52:36.253020       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5de133e857a572d142b440b7c8e05a7364a053f59779c64040143c043700f7a9] <==
	I1126 19:41:25.728717       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 19:41:25.728725       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 19:41:25.728930       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1126 19:41:25.729804       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 19:41:25.729841       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 19:41:25.729860       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 19:41:25.729930       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 19:41:25.730034       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 19:41:25.730045       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 19:41:25.730067       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 19:41:25.730088       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 19:41:25.730344       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 19:41:25.730345       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 19:41:25.729930       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 19:41:25.731008       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 19:41:25.732578       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 19:41:25.732646       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 19:41:25.732715       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 19:41:25.732746       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 19:41:25.732774       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 19:41:25.733524       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 19:41:25.734736       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:41:25.737736       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-960066" podCIDRs=["10.244.0.0/24"]
	I1126 19:41:25.746915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:41:40.699430       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [c0b72d5e719c6f748ace540626d687d662408b6ecdee533741d7034b6a88a480] <==
	I1126 19:42:39.662997       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 19:42:39.663048       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 19:42:39.663069       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 19:42:39.663067       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 19:42:39.664174       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:42:39.664190       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 19:42:39.664198       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 19:42:39.664178       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 19:42:39.667932       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:42:39.670590       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:42:39.672736       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 19:42:39.682009       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1126 19:42:39.684261       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 19:42:39.685415       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1126 19:42:39.687760       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 19:42:39.689997       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 19:42:39.691092       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 19:42:39.693265       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 19:42:39.700549       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1126 19:43:09.838356       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1126 19:43:09.841494       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1126 19:43:09.844366       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1126 19:43:09.847831       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1126 19:43:09.847788       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1126 19:43:09.852451       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [a6f489dc80ef9516261dffa6fe57016e8d7f4dbac5b564f1b30bdfece8a3c07d] <==
	I1126 19:41:27.246063       1 server_linux.go:53] "Using iptables proxy"
	I1126 19:41:27.301255       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 19:41:27.401945       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 19:41:27.401992       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1126 19:41:27.402092       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 19:41:27.420466       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 19:41:27.420526       1 server_linux.go:132] "Using iptables Proxier"
	I1126 19:41:27.425354       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 19:41:27.425826       1 server.go:527] "Version info" version="v1.34.1"
	I1126 19:41:27.425853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:41:27.428325       1 config.go:200] "Starting service config controller"
	I1126 19:41:27.428349       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 19:41:27.428363       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 19:41:27.428374       1 config.go:106] "Starting endpoint slice config controller"
	I1126 19:41:27.428385       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 19:41:27.428378       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 19:41:27.428428       1 config.go:309] "Starting node config controller"
	I1126 19:41:27.428444       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 19:41:27.428451       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 19:41:27.528842       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 19:41:27.528942       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 19:41:27.528960       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b45332b1455613c55fbac8a2536091d000c6ab4545b6e2a6b2dc35471b011248] <==
	E1126 19:42:03.248591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-960066&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:42:04.311293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-960066&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:42:06.119001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-960066&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:42:11.743395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-960066&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:42:30.923120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-960066&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1126 19:42:52.548275       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 19:42:52.548307       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1126 19:42:52.548400       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 19:42:52.566602       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 19:42:52.566654       1 server_linux.go:132] "Using iptables Proxier"
	I1126 19:42:52.571820       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 19:42:52.572169       1 server.go:527] "Version info" version="v1.34.1"
	I1126 19:42:52.572207       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:42:52.573411       1 config.go:200] "Starting service config controller"
	I1126 19:42:52.573433       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 19:42:52.573443       1 config.go:106] "Starting endpoint slice config controller"
	I1126 19:42:52.573467       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 19:42:52.573516       1 config.go:309] "Starting node config controller"
	I1126 19:42:52.573533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 19:42:52.573540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 19:42:52.573679       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 19:42:52.573705       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 19:42:52.673890       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 19:42:52.673928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 19:42:52.673890       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7e4f0d75f0c054fd0f417b11fd817ef0ba2027b4e47104e99afc6712684e3ae8] <==
	E1126 19:41:18.769978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:41:18.769978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 19:41:18.770019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:41:18.769983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:41:18.770067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:41:18.770094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 19:41:18.770151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 19:41:18.770210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:41:19.589590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:41:19.604534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 19:41:19.727702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:41:19.794130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 19:41:19.831364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:41:19.855532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:41:19.869825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:41:19.880820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1126 19:41:19.936835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:41:19.944835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1126 19:41:22.565950       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 19:42:02.745541       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 19:42:02.745572       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1126 19:42:02.745810       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1126 19:42:02.745881       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1126 19:42:02.745892       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1126 19:42:02.745913       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e918cf50a7de9605c6ed94d7f70d4a6c330e4002d0762c2f6d9705380d5aa518] <==
	E1126 19:42:23.336527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 19:42:23.892249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 19:42:24.080417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1126 19:42:24.135603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 19:42:24.220226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:42:27.321132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 19:42:29.418034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 19:42:29.633043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:42:30.245405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 19:42:30.428110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:42:30.669354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:42:31.627019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:42:31.957979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:42:32.706067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 19:42:32.715229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:42:32.895438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 19:42:33.218542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:42:33.807084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:42:34.475994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43408->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1126 19:42:34.475994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43290->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:42:34.476016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43426->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 19:42:34.475993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43392->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 19:42:34.476003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43406->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:42:34.476093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43410->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1126 19:42:57.867941       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 19:50:38 functional-960066 kubelet[4163]: E1126 19:50:38.388376    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:50:43 functional-960066 kubelet[4163]: E1126 19:50:43.388275    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:50:49 functional-960066 kubelet[4163]: E1126 19:50:49.388197    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:50:58 functional-960066 kubelet[4163]: E1126 19:50:58.388191    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:51:02 functional-960066 kubelet[4163]: E1126 19:51:02.387828    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:51:09 functional-960066 kubelet[4163]: E1126 19:51:09.388534    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:51:17 functional-960066 kubelet[4163]: E1126 19:51:17.388199    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:51:20 functional-960066 kubelet[4163]: E1126 19:51:20.387916    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:51:31 functional-960066 kubelet[4163]: E1126 19:51:31.388728    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:51:33 functional-960066 kubelet[4163]: E1126 19:51:33.387899    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:51:46 functional-960066 kubelet[4163]: E1126 19:51:46.388164    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:51:46 functional-960066 kubelet[4163]: E1126 19:51:46.388217    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:52:00 functional-960066 kubelet[4163]: E1126 19:52:00.388375    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:52:01 functional-960066 kubelet[4163]: E1126 19:52:01.388023    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:52:14 functional-960066 kubelet[4163]: E1126 19:52:14.387523    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:52:15 functional-960066 kubelet[4163]: E1126 19:52:15.388331    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:52:27 functional-960066 kubelet[4163]: E1126 19:52:27.388441    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:52:28 functional-960066 kubelet[4163]: E1126 19:52:28.387537    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:52:42 functional-960066 kubelet[4163]: E1126 19:52:42.388381    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:52:42 functional-960066 kubelet[4163]: E1126 19:52:42.388550    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:52:55 functional-960066 kubelet[4163]: E1126 19:52:55.388676    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:52:57 functional-960066 kubelet[4163]: E1126 19:52:57.387955    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:53:10 functional-960066 kubelet[4163]: E1126 19:53:10.388439    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	Nov 26 19:53:12 functional-960066 kubelet[4163]: E1126 19:53:12.388033    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ffsgf" podUID="33a96605-8568-4302-a7a6-3f4c314a18ac"
	Nov 26 19:53:21 functional-960066 kubelet[4163]: E1126 19:53:21.387690    4163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rlq87" podUID="ed64ff84-1e4c-4b77-b549-177b15a44351"
	
	
	==> kubernetes-dashboard [fd5bd0aafa2cab25b3d7c0b4a74995c2c17afe102891b096ad72729556702e37] <==
	2025/11/26 19:43:17 Starting overwatch
	2025/11/26 19:43:17 Using namespace: kubernetes-dashboard
	2025/11/26 19:43:17 Using in-cluster config to connect to apiserver
	2025/11/26 19:43:17 Using secret token for csrf signing
	2025/11/26 19:43:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 19:43:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 19:43:17 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 19:43:17 Generating JWE encryption key
	2025/11/26 19:43:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 19:43:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 19:43:17 Initializing JWE encryption key from synchronized object
	2025/11/26 19:43:17 Creating in-cluster Sidecar client
	2025/11/26 19:43:17 Successful request to sidecar
	2025/11/26 19:43:17 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [2deb3e50a4f33ebd488cf631449e2dab2722f2d6f353222390fc2b1b222cc5ae] <==
	W1126 19:53:01.213858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:03.216309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:03.219811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:05.222759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:05.226614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:07.229502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:07.233045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:09.235075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:09.238480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:11.242282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:11.247194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:13.250009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:13.253863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:15.256537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:15.261009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:17.264349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:17.268206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:19.270941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:19.275632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:21.278448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:21.282196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:23.284485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:23.289109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:25.291590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:53:25.295272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [33a0a9815f832513ac14d7adeba2b5296fe9e9b7cc3467025a2980607dcc50a2] <==
	I1126 19:42:03.143202       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 19:42:03.146510       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960066 -n functional-960066
helpers_test.go:269: (dbg) Run:  kubectl --context functional-960066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-rlq87 hello-node-connect-7d85dfc575-ffsgf
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-960066 describe pod busybox-mount hello-node-75c85bcc94-rlq87 hello-node-connect-7d85dfc575-ffsgf
helpers_test.go:290: (dbg) kubectl --context functional-960066 describe pod busybox-mount hello-node-75c85bcc94-rlq87 hello-node-connect-7d85dfc575-ffsgf:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960066/192.168.49.2
	Start Time:       Wed, 26 Nov 2025 19:43:08 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://74e0b1731bf1ef1d5ff66141516ddf1673874ed8a09af65141e00f7b0922ba52
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 26 Nov 2025 19:43:09 +0000
	      Finished:     Wed, 26 Nov 2025 19:43:09 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9x4zj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-9x4zj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-960066
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 715ms (715ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-rlq87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960066/192.168.49.2
	Start Time:       Wed, 26 Nov 2025 19:43:06 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j6tlv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j6tlv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-rlq87 to functional-960066
	  Normal   Pulling    7m13s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m13s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m13s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    16s (x41 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     16s (x41 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-ffsgf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960066/192.168.49.2
	Start Time:       Wed, 26 Nov 2025 19:43:23 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rmjh4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rmjh4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ffsgf to functional-960066
	  Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.68s)
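The root cause in the pod events above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") is CRI-O's short-name resolution: with `short-name-mode = "enforcing"` and more than one unqualified-search registry configured, an unqualified image reference expands to several candidate references and the pull is refused rather than guessed. A minimal sketch of that expansion (the second registry, quay.io, is an assumed example; the node's actual registries.conf is not shown in this report):

```shell
# Expand an unqualified ("short") image name against the configured
# unqualified-search-registries, as CRI-O's enforcing short-name mode must.
# "quay.io" is an assumed second search registry here, for illustration only.
registries="docker.io quay.io"
short="kicbase/echo-server:latest"
candidates=$(for r in $registries; do echo "$r/$short"; done)
echo "$candidates"
# More than one candidate is the "ambiguous list": an enforcing policy
# refuses to pick one, so the kubelet surfaces ErrImagePull.
```

Pinning the manifest to a fully qualified reference such as `docker.io/kicbase/echo-server:latest` would sidestep the ambiguity entirely, since no search-registry expansion is needed.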

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-960066 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-960066 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-rlq87" [ed64ff84-1e4c-4b77-b549-177b15a44351] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960066 -n functional-960066
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-26 19:53:07.046208985 +0000 UTC m=+1099.971638585
functional_test.go:1460: (dbg) Run:  kubectl --context functional-960066 describe po hello-node-75c85bcc94-rlq87 -n default
functional_test.go:1460: (dbg) kubectl --context functional-960066 describe po hello-node-75c85bcc94-rlq87 -n default:
Name:             hello-node-75c85bcc94-rlq87
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-960066/192.168.49.2
Start Time:       Wed, 26 Nov 2025 19:43:06 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j6tlv (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-j6tlv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-rlq87 to functional-960066
Normal   Pulling    6m54s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m54s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m54s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-960066 logs hello-node-75c85bcc94-rlq87 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-960066 logs hello-node-75c85bcc94-rlq87 -n default: exit status 1 (66.120667ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-rlq87" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-960066 logs hello-node-75c85bcc94-rlq87 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image load --daemon kicbase/echo-server:functional-960066 --alsologtostderr
2025/11/26 19:43:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-960066" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image load --daemon kicbase/echo-server:functional-960066 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-960066 image load --daemon kicbase/echo-server:functional-960066 --alsologtostderr: (1.308217914s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-960066" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-960066
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image load --daemon kicbase/echo-server:functional-960066 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-960066" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image save kicbase/echo-server:functional-960066 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1126 19:43:23.109811   51896 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:43:23.109958   51896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:43:23.109966   51896 out.go:374] Setting ErrFile to fd 2...
	I1126 19:43:23.109970   51896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:43:23.110126   51896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:43:23.110621   51896 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:43:23.110712   51896 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:43:23.111135   51896 cli_runner.go:164] Run: docker container inspect functional-960066 --format={{.State.Status}}
	I1126 19:43:23.128688   51896 ssh_runner.go:195] Run: systemctl --version
	I1126 19:43:23.128750   51896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-960066
	I1126 19:43:23.145921   51896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/functional-960066/id_rsa Username:docker}
	I1126 19:43:23.241453   51896 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1126 19:43:23.241536   51896 cache_images.go:255] Failed to load cached images for "functional-960066": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1126 19:43:23.241566   51896 cache_images.go:267] failed pushing to: functional-960066

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
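This failure is a cascade: ImageSaveToFile above never produced the tar, so the load fails at the `stat` in cache_images.go. A hedged guard sketch, reusing the path from this report, that would report the missing archive up front instead of failing mid-push:

```shell
# Guard sketch: only attempt `image load` if the earlier `image save`
# actually produced the archive. The path is the one from this report;
# the guard itself is illustrative, not part of the test suite.
tar="/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar"
if [ -f "$tar" ]; then
  status="loaded"
  out/minikube-linux-amd64 -p functional-960066 image load "$tar"
else
  status="skipped"
  echo "skipping load: $tar not found (did 'image save' fail?)" >&2
fi
```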

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-960066
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image save --daemon kicbase/echo-server:functional-960066 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-960066
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-960066: exit status 1 (16.782371ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-960066

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-960066

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 service --namespace=default --https --url hello-node: exit status 115 (519.866513ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31252
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-960066 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 service hello-node --url --format={{.IP}}: exit status 115 (519.058854ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-960066 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 service hello-node --url: exit status 115 (515.312783ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31252
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-960066 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31252
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
TestJSONOutput/pause/Command (2.41s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-251841 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-251841 --output=json --user=testUser: exit status 80 (2.406046034s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"40429b39-f1f9-460b-a8aa-4f0e48ede051","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-251841 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7fc1a5aa-700a-4e16-bfad-3796e84dc846","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-26T20:03:44Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"1aa0e822-511c-41eb-97bd-308d81917053","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-251841 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.41s)
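The JSON lines in the stdout above are CloudEvents emitted by `minikube ... --output=json`; the test's assertion amounts to scanning them for error events. A minimal sketch of that scan (not part of the test suite; `error_events` and the trimmed sample event are illustrative, with the event shape copied from the log above):

```python
import json

# Error event shape copied from the stdout above, with the message trimmed.
sample = (
    '{"specversion":"1.0","id":"7fc1a5aa-700a-4e16-bfad-3796e84dc846",'
    '"source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json",'
    '"data":{"advice":"","exitcode":"80","issues":"",'
    '"message":"Pause: list running: runc: sudo runc list -f json: '
    'Process exited with status 1","name":"GUEST_PAUSE","url":""}}'
)

def error_events(lines):
    """Yield (name, exitcode, message) for each minikube error event."""
    for line in lines:
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            data = event["data"]
            yield data.get("name"), data.get("exitcode"), data.get("message")

for name, code, _msg in error_events([sample]):
    print(name, code)  # GUEST_PAUSE 80
```

Run against the full stdout above, this surfaces `GUEST_PAUSE` / exit code 80 without parsing the human-readable box drawing.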

                                                
                                    
TestJSONOutput/unpause/Command (2.12s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-251841 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-251841 --output=json --user=testUser: exit status 80 (2.115144942s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3c690146-619c-442c-aa34-31ad54ca296f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-251841 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"53e47fbc-0705-497c-9fba-73076721c3ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-26T20:03:46Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"8f8e26d2-b0f7-4b18-9cfd-b5b345eacf82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-251841 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.12s)

                                                
                                    
TestPause/serial/Pause (6.28s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-088343 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-088343 --alsologtostderr -v=5: exit status 80 (2.548413606s)

                                                
                                                
-- stdout --
	* Pausing node pause-088343 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 20:17:16.595534  206311 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:17:16.595791  206311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:17:16.595801  206311 out.go:374] Setting ErrFile to fd 2...
	I1126 20:17:16.595806  206311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:17:16.596011  206311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:17:16.596212  206311 out.go:368] Setting JSON to false
	I1126 20:17:16.596227  206311 mustload.go:66] Loading cluster: pause-088343
	I1126 20:17:16.596642  206311 config.go:182] Loaded profile config "pause-088343": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:17:16.597132  206311 cli_runner.go:164] Run: docker container inspect pause-088343 --format={{.State.Status}}
	I1126 20:17:16.615819  206311 host.go:66] Checking if "pause-088343" exists ...
	I1126 20:17:16.616089  206311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:17:16.679667  206311 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:false NGoroutines:73 SystemTime:2025-11-26 20:17:16.669104588 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:17:16.680336  206311 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-088343 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:17:16.873656  206311 out.go:179] * Pausing node pause-088343 ... 
	I1126 20:17:16.919726  206311 host.go:66] Checking if "pause-088343" exists ...
	I1126 20:17:16.920059  206311 ssh_runner.go:195] Run: systemctl --version
	I1126 20:17:16.920110  206311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-088343
	I1126 20:17:16.940363  206311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/pause-088343/id_rsa Username:docker}
	I1126 20:17:17.037434  206311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:17:17.049379  206311 pause.go:52] kubelet running: true
	I1126 20:17:17.049434  206311 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:17:17.186003  206311 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:17:17.186088  206311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:17:17.250672  206311 cri.go:89] found id: "23efe5ed2d5129e2520b991840514eab0b95ecff061cdeed4dce4a50852925f9"
	I1126 20:17:17.250692  206311 cri.go:89] found id: "ded1aa6ae8ad4b032f860f18ec98c2321c1aecc1632c746bf85cfd4786777f49"
	I1126 20:17:17.250696  206311 cri.go:89] found id: "08f40eeb695e63781aee8a6dd64716e465781b63c96d5ecc21e5a1895bc0c4a0"
	I1126 20:17:17.250699  206311 cri.go:89] found id: "38113599f674298b9b3fea5b2a7c1e4c46276f1c818aa294ebd11f463d2b5429"
	I1126 20:17:17.250702  206311 cri.go:89] found id: "8fff4af24a8946fc7daabaa1caacec5d7deeaf3f92227b0f6671f78b34598cb6"
	I1126 20:17:17.250705  206311 cri.go:89] found id: "6ec60134d42facb0d523b0d825b2d62cae44913de8328d8a2ed7b3bcba96b481"
	I1126 20:17:17.250708  206311 cri.go:89] found id: "77ab3602e99c51dd1062dba3be62901ab816c07d84dd7894b76a2f95e108f6c0"
	I1126 20:17:17.250711  206311 cri.go:89] found id: ""
	I1126 20:17:17.250746  206311 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:17:17.262278  206311 retry.go:31] will retry after 201.434192ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:17:17Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:17:17.464668  206311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:17:17.478819  206311 pause.go:52] kubelet running: false
	I1126 20:17:17.478881  206311 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:17:17.587973  206311 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:17:17.588051  206311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:17:17.651622  206311 cri.go:89] found id: "23efe5ed2d5129e2520b991840514eab0b95ecff061cdeed4dce4a50852925f9"
	I1126 20:17:17.651648  206311 cri.go:89] found id: "ded1aa6ae8ad4b032f860f18ec98c2321c1aecc1632c746bf85cfd4786777f49"
	I1126 20:17:17.651654  206311 cri.go:89] found id: "08f40eeb695e63781aee8a6dd64716e465781b63c96d5ecc21e5a1895bc0c4a0"
	I1126 20:17:17.651661  206311 cri.go:89] found id: "38113599f674298b9b3fea5b2a7c1e4c46276f1c818aa294ebd11f463d2b5429"
	I1126 20:17:17.651665  206311 cri.go:89] found id: "8fff4af24a8946fc7daabaa1caacec5d7deeaf3f92227b0f6671f78b34598cb6"
	I1126 20:17:17.651669  206311 cri.go:89] found id: "6ec60134d42facb0d523b0d825b2d62cae44913de8328d8a2ed7b3bcba96b481"
	I1126 20:17:17.651674  206311 cri.go:89] found id: "77ab3602e99c51dd1062dba3be62901ab816c07d84dd7894b76a2f95e108f6c0"
	I1126 20:17:17.651679  206311 cri.go:89] found id: ""
	I1126 20:17:17.651729  206311 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:17:17.662808  206311 retry.go:31] will retry after 290.73487ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:17:17Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:17:17.954347  206311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:17:17.967107  206311 pause.go:52] kubelet running: false
	I1126 20:17:17.967168  206311 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:17:18.073287  206311 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:17:18.073370  206311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:17:18.153476  206311 cri.go:89] found id: "23efe5ed2d5129e2520b991840514eab0b95ecff061cdeed4dce4a50852925f9"
	I1126 20:17:18.153498  206311 cri.go:89] found id: "ded1aa6ae8ad4b032f860f18ec98c2321c1aecc1632c746bf85cfd4786777f49"
	I1126 20:17:18.153505  206311 cri.go:89] found id: "08f40eeb695e63781aee8a6dd64716e465781b63c96d5ecc21e5a1895bc0c4a0"
	I1126 20:17:18.153510  206311 cri.go:89] found id: "38113599f674298b9b3fea5b2a7c1e4c46276f1c818aa294ebd11f463d2b5429"
	I1126 20:17:18.153515  206311 cri.go:89] found id: "8fff4af24a8946fc7daabaa1caacec5d7deeaf3f92227b0f6671f78b34598cb6"
	I1126 20:17:18.153519  206311 cri.go:89] found id: "6ec60134d42facb0d523b0d825b2d62cae44913de8328d8a2ed7b3bcba96b481"
	I1126 20:17:18.153523  206311 cri.go:89] found id: "77ab3602e99c51dd1062dba3be62901ab816c07d84dd7894b76a2f95e108f6c0"
	I1126 20:17:18.153527  206311 cri.go:89] found id: ""
	I1126 20:17:18.153568  206311 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:17:18.165836  206311 retry.go:31] will retry after 682.723196ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:17:18Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:17:18.849657  206311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:17:18.864719  206311 pause.go:52] kubelet running: false
	I1126 20:17:18.864770  206311 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:17:18.995570  206311 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:17:18.995653  206311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:17:19.067325  206311 cri.go:89] found id: "23efe5ed2d5129e2520b991840514eab0b95ecff061cdeed4dce4a50852925f9"
	I1126 20:17:19.067344  206311 cri.go:89] found id: "ded1aa6ae8ad4b032f860f18ec98c2321c1aecc1632c746bf85cfd4786777f49"
	I1126 20:17:19.067349  206311 cri.go:89] found id: "08f40eeb695e63781aee8a6dd64716e465781b63c96d5ecc21e5a1895bc0c4a0"
	I1126 20:17:19.067363  206311 cri.go:89] found id: "38113599f674298b9b3fea5b2a7c1e4c46276f1c818aa294ebd11f463d2b5429"
	I1126 20:17:19.067368  206311 cri.go:89] found id: "8fff4af24a8946fc7daabaa1caacec5d7deeaf3f92227b0f6671f78b34598cb6"
	I1126 20:17:19.067373  206311 cri.go:89] found id: "6ec60134d42facb0d523b0d825b2d62cae44913de8328d8a2ed7b3bcba96b481"
	I1126 20:17:19.067378  206311 cri.go:89] found id: "77ab3602e99c51dd1062dba3be62901ab816c07d84dd7894b76a2f95e108f6c0"
	I1126 20:17:19.067383  206311 cri.go:89] found id: ""
	I1126 20:17:19.067424  206311 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:17:19.081174  206311 out.go:203] 
	W1126 20:17:19.082280  206311 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:17:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:17:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:17:19.082298  206311 out.go:285] * 
	* 
	W1126 20:17:19.086156  206311 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:17:19.087517  206311 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-088343 --alsologtostderr -v=5" : exit status 80
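Every retry above fails in the same way: `sudo runc list -f json` exits 1 because runc's state root, `/run/runc`, does not exist inside the node. A minimal sketch of that precondition (not from the test suite; `probe` is an illustrative stand-in for the directory check runc performs, not a runc API):

```python
import os

def probe(state_root):
    """Mimic the shape of runc's state-root check: `runc list` reads its
    state directory (default /run/runc) and fails when it is absent."""
    if os.path.isdir(state_root):
        return f"state root present: {state_root}"
    return f"state root missing: {state_root}"

print(probe("/tmp"))                    # present on any normal Linux host
print(probe("/nonexistent/runc-root"))  # the shape of the failure above
```

When the directory is missing, the real command produces exactly the stderr seen in each retry: `open /run/runc: no such file or directory`.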
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-088343
helpers_test.go:243: (dbg) docker inspect pause-088343:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723",
	        "Created": "2025-11-26T20:16:03.801250454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:16:03.841698262Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723/hostname",
	        "HostsPath": "/var/lib/docker/containers/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723/hosts",
	        "LogPath": "/var/lib/docker/containers/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723-json.log",
	        "Name": "/pause-088343",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-088343:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-088343",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723",
	                "LowerDir": "/var/lib/docker/overlay2/66139dc8129f18048e737c32330c0854b27229245478eff75ae47a543b1b4ea6-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/66139dc8129f18048e737c32330c0854b27229245478eff75ae47a543b1b4ea6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/66139dc8129f18048e737c32330c0854b27229245478eff75ae47a543b1b4ea6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/66139dc8129f18048e737c32330c0854b27229245478eff75ae47a543b1b4ea6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-088343",
	                "Source": "/var/lib/docker/volumes/pause-088343/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-088343",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-088343",
	                "name.minikube.sigs.k8s.io": "pause-088343",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5984d0bc20a107bcd2cc7945ae887cc691d291c34ac31538f25817887a9337fd",
	            "SandboxKey": "/var/run/docker/netns/5984d0bc20a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-088343": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "53ea54025484ac2c9df8cec28a4fbb6a8eb5da8d25f389978ebd3b8f51588cdb",
	                    "EndpointID": "acea68d82aff9a5b2467980fc663ff543304e9c393d23c254342c6a76a91ee9d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "1a:05:75:24:f2:47",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-088343",
	                        "19d374e09e09"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
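The `NetworkSettings.Ports` block in the inspect output above is how the harness discovers each container port's host binding. A minimal sketch of that lookup (the JSON shape is copied from the output above; `host_port` is a hypothetical helper, equivalent to the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` that appears later in these logs):

```python
import json

# Trimmed-down copy of the NetworkSettings.Ports object shown above.
inspect_output = json.loads("""
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32973"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32976"}]
}}}]
""")

def host_port(inspect, container_port):
    # Mirrors the Go template lookup: first binding's HostPort, if any.
    bindings = inspect[0]["NetworkSettings"]["Ports"].get(container_port) or []
    return int(bindings[0]["HostPort"]) if bindings else None

print(host_port(inspect_output, "22/tcp"))  # → 32973
```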
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-088343 -n pause-088343
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-088343 -n pause-088343: exit status 2 (336.107843ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-088343 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-088343 logs -n 25: (1.069092469s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-926822 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-926822       │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:15 UTC │
	│ delete  │ -p scheduled-stop-926822                                                                                                                 │ scheduled-stop-926822       │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:15 UTC │
	│ start   │ -p insufficient-storage-946161 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-946161 │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │                     │
	│ delete  │ -p insufficient-storage-946161                                                                                                           │ insufficient-storage-946161 │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:15 UTC │
	│ start   │ -p NoKubernetes-237154 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │                     │
	│ start   │ -p pause-088343 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-088343                │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p force-systemd-env-093715 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-093715    │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p offline-crio-073078 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-073078         │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p NoKubernetes-237154 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p NoKubernetes-237154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ delete  │ -p force-systemd-env-093715                                                                                                              │ force-systemd-env-093715    │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p missing-upgrade-521324 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-521324      │ jenkins │ v1.35.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:17 UTC │
	│ delete  │ -p NoKubernetes-237154                                                                                                                   │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p NoKubernetes-237154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ ssh     │ -p NoKubernetes-237154 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │                     │
	│ stop    │ -p NoKubernetes-237154                                                                                                                   │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p NoKubernetes-237154 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:17 UTC │
	│ ssh     │ -p NoKubernetes-237154 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │                     │
	│ delete  │ -p NoKubernetes-237154                                                                                                                   │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-225144   │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │                     │
	│ delete  │ -p offline-crio-073078                                                                                                                   │ offline-crio-073078         │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p pause-088343 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-088343                │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p stopped-upgrade-211103 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-211103      │ jenkins │ v1.35.0 │ 26 Nov 25 20:17 UTC │                     │
	│ start   │ -p missing-upgrade-521324 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-521324      │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │                     │
	│ pause   │ -p pause-088343 --alsologtostderr -v=5                                                                                                   │ pause-088343                │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:17:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
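The header above documents the klog line format (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`). When post-processing these logs, a regex along those lines can split each entry into fields (a sketch written against that stated format, not a minikube API):

```python
import re

# Field-for-field translation of the stated header format:
# [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_LINE = re.compile(
    r"^(?P<level>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<threadid>\d+) (?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

m = KLOG_LINE.match(
    "I1126 20:17:13.569544  205232 out.go:360] Setting OutFile to fd 1 ..."
)
print(m.group("level"), m.group("file"), m.group("line"))  # → I out.go 360
```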
	I1126 20:17:13.569544  205232 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:17:13.569670  205232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:17:13.569682  205232 out.go:374] Setting ErrFile to fd 2...
	I1126 20:17:13.569689  205232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:17:13.569994  205232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:17:13.570526  205232 out.go:368] Setting JSON to false
	I1126 20:17:13.571802  205232 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3584,"bootTime":1764184650,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:17:13.571857  205232 start.go:143] virtualization: kvm guest
	I1126 20:17:13.573276  205232 out.go:179] * [missing-upgrade-521324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:17:13.575259  205232 notify.go:221] Checking for updates...
	I1126 20:17:13.575268  205232 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:17:13.577133  205232 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:17:13.581011  205232 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:17:13.582859  205232 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:17:13.584848  205232 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:17:13.586223  205232 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:17:13.587893  205232 config.go:182] Loaded profile config "missing-upgrade-521324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1126 20:17:13.589736  205232 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1126 20:17:13.594925  205232 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:17:13.622662  205232 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:17:13.622774  205232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:17:13.696037  205232 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-26 20:17:13.684779089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:17:13.696192  205232 docker.go:319] overlay module found
	I1126 20:17:13.698261  205232 out.go:179] * Using the docker driver based on existing profile
	I1126 20:17:13.699398  205232 start.go:309] selected driver: docker
	I1126 20:17:13.699412  205232 start.go:927] validating driver "docker" against &{Name:missing-upgrade-521324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-521324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:17:13.699582  205232 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:17:13.700339  205232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:17:13.791247  205232 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-26 20:17:13.774204325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:17:13.791758  205232 cni.go:84] Creating CNI manager for ""
	I1126 20:17:13.791889  205232 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:17:13.791972  205232 start.go:353] cluster config:
	{Name:missing-upgrade-521324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-521324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:17:13.794945  205232 out.go:179] * Starting "missing-upgrade-521324" primary control-plane node in "missing-upgrade-521324" cluster
	I1126 20:17:13.795985  205232 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:17:13.797150  205232 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:17:13.798182  205232 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1126 20:17:13.798215  205232 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1126 20:17:13.798250  205232 cache.go:65] Caching tarball of preloaded images
	I1126 20:17:13.798282  205232 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1126 20:17:13.798381  205232 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:17:13.798396  205232 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1126 20:17:13.798543  205232 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/missing-upgrade-521324/config.json ...
	I1126 20:17:13.825935  205232 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1126 20:17:13.825959  205232 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1126 20:17:13.825977  205232 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:17:13.826024  205232 start.go:360] acquireMachinesLock for missing-upgrade-521324: {Name:mk63135e99d868a2faf91fd11fac0b75a0ab9998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:17:13.826098  205232 start.go:364] duration metric: took 42.147µs to acquireMachinesLock for "missing-upgrade-521324"
	I1126 20:17:13.826126  205232 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:17:13.826135  205232 fix.go:54] fixHost starting: 
	I1126 20:17:13.826419  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:13.847158  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:13.847221  205232 fix.go:112] recreateIfNeeded on missing-upgrade-521324: state= err=unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.847267  205232 fix.go:117] machineExists: false. err=machine does not exist
	I1126 20:17:13.848522  205232 out.go:179] * docker "missing-upgrade-521324" container is missing, will recreate.
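The recreate decision above hinges entirely on the exit status of `docker container inspect`: exit code 1 with "No such container" is read as `machineExists: false`, which triggers the recreate. That check can be sketched as follows (`container_state` and the stub runners are illustrative stand-ins, not minikube code):

```python
import subprocess

def container_state(name, runner=subprocess.run):
    """Return the container's .State.Status, or None if it does not exist.

    Mirrors the log above: a non-zero exit from `docker container inspect`
    is treated as "machine does not exist", so the caller recreates it.
    """
    proc = runner(
        ["docker", "container", "inspect", name,
         "--format", "{{.State.Status}}"],
        capture_output=True, text=True,
    )
    return None if proc.returncode != 0 else proc.stdout.strip()

# Stub runners so the sketch is demonstrable without a Docker daemon:
class _Result:
    def __init__(self, returncode, stdout=""):
        self.returncode, self.stdout = returncode, stdout

missing = lambda *a, **kw: _Result(1)             # container absent -> exit 1
running = lambda *a, **kw: _Result(0, "running\n")

print(container_state("missing-upgrade-521324", runner=missing))  # → None
```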
	I1126 20:17:10.315614  201907 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-225144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.999065472s)
	I1126 20:17:10.315648  201907 kic.go:203] duration metric: took 4.999269089s to extract preloaded images to volume ...
	W1126 20:17:10.315771  201907 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:17:10.315805  201907 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:17:10.315847  201907 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:17:10.380969  201907 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-225144 --name kubernetes-upgrade-225144 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-225144 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-225144 --network kubernetes-upgrade-225144 --ip 192.168.103.2 --volume kubernetes-upgrade-225144:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:17:10.684720  201907 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-225144 --format={{.State.Running}}
	I1126 20:17:10.703885  201907 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-225144 --format={{.State.Status}}
	I1126 20:17:10.729309  201907 cli_runner.go:164] Run: docker exec kubernetes-upgrade-225144 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:17:10.779571  201907 oci.go:144] the created container "kubernetes-upgrade-225144" has a running status.
	I1126 20:17:10.779601  201907 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa...
	I1126 20:17:10.944022  201907 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:17:10.976289  201907 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-225144 --format={{.State.Status}}
	I1126 20:17:11.005554  201907 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:17:11.005573  201907 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-225144 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:17:11.068793  201907 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-225144 --format={{.State.Status}}
	I1126 20:17:11.091429  201907 machine.go:94] provisionDockerMachine start ...
	I1126 20:17:11.091563  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:11.113549  201907 main.go:143] libmachine: Using SSH client type: native
	I1126 20:17:11.113910  201907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1126 20:17:11.113933  201907 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:17:11.256921  201907 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225144
	
	I1126 20:17:11.256946  201907 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-225144"
	I1126 20:17:11.257012  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:11.276997  201907 main.go:143] libmachine: Using SSH client type: native
	I1126 20:17:11.277197  201907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1126 20:17:11.277210  201907 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-225144 && echo "kubernetes-upgrade-225144" | sudo tee /etc/hostname
	I1126 20:17:11.427390  201907 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225144
	
	I1126 20:17:11.427549  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:11.446892  201907 main.go:143] libmachine: Using SSH client type: native
	I1126 20:17:11.447178  201907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1126 20:17:11.447206  201907 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-225144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-225144/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-225144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:17:11.593738  201907 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:17:11.593765  201907 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:17:11.593791  201907 ubuntu.go:190] setting up certificates
	I1126 20:17:11.593801  201907 provision.go:84] configureAuth start
	I1126 20:17:11.593854  201907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-225144
	I1126 20:17:11.612303  201907 provision.go:143] copyHostCerts
	I1126 20:17:11.612360  201907 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:17:11.612371  201907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:17:11.612429  201907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:17:11.612539  201907 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:17:11.612549  201907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:17:11.612578  201907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:17:11.612637  201907 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:17:11.612647  201907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:17:11.612682  201907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:17:11.612745  201907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-225144 san=[127.0.0.1 192.168.103.2 kubernetes-upgrade-225144 localhost minikube]
	I1126 20:17:11.816900  201907 provision.go:177] copyRemoteCerts
	I1126 20:17:11.816963  201907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:17:11.817001  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:11.837271  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:11.948703  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:17:11.971551  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1126 20:17:11.992582  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:17:12.013929  201907 provision.go:87] duration metric: took 420.113772ms to configureAuth
	I1126 20:17:12.013955  201907 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:17:12.014147  201907 config.go:182] Loaded profile config "kubernetes-upgrade-225144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:17:12.014267  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.034862  201907 main.go:143] libmachine: Using SSH client type: native
	I1126 20:17:12.035164  201907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1126 20:17:12.035192  201907 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:17:12.346703  201907 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:17:12.346743  201907 machine.go:97] duration metric: took 1.255291253s to provisionDockerMachine
	I1126 20:17:12.346756  201907 client.go:176] duration metric: took 7.535982772s to LocalClient.Create
	I1126 20:17:12.346775  201907 start.go:167] duration metric: took 7.536044281s to libmachine.API.Create "kubernetes-upgrade-225144"
	I1126 20:17:12.346786  201907 start.go:293] postStartSetup for "kubernetes-upgrade-225144" (driver="docker")
	I1126 20:17:12.346798  201907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:17:12.346889  201907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:17:12.346941  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.369307  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:12.477075  201907 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:17:12.480763  201907 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:17:12.480799  201907 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:17:12.480811  201907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:17:12.480856  201907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:17:12.480929  201907 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:17:12.481013  201907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:17:12.489736  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:17:12.510830  201907 start.go:296] duration metric: took 164.029867ms for postStartSetup
	I1126 20:17:12.511191  201907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-225144
	I1126 20:17:12.530789  201907 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/config.json ...
	I1126 20:17:12.531059  201907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:17:12.531109  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.550536  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:12.648946  201907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:17:12.653845  201907 start.go:128] duration metric: took 7.844914797s to createHost
	I1126 20:17:12.653872  201907 start.go:83] releasing machines lock for "kubernetes-upgrade-225144", held for 7.845050584s
	I1126 20:17:12.653944  201907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-225144
	I1126 20:17:12.672283  201907 ssh_runner.go:195] Run: cat /version.json
	I1126 20:17:12.672343  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.672349  201907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:17:12.672416  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.694371  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:12.694691  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:12.845598  201907 ssh_runner.go:195] Run: systemctl --version
	I1126 20:17:12.851624  201907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:17:12.886589  201907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:17:12.891533  201907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:17:12.891612  201907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:17:12.960655  201907 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:17:12.960679  201907 start.go:496] detecting cgroup driver to use...
	I1126 20:17:12.960712  201907 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:17:12.960759  201907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:17:12.978339  201907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:17:12.991736  201907 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:17:12.991807  201907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:17:13.009419  201907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:17:13.027088  201907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:17:13.140907  201907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:17:13.250479  201907 docker.go:234] disabling docker service ...
	I1126 20:17:13.250549  201907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:17:13.271146  201907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:17:13.284068  201907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:17:13.377256  201907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:17:13.485962  201907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:17:13.542746  201907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:17:13.567386  201907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1126 20:17:13.567441  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.582517  201907 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:17:13.582576  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.593612  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.605021  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.616059  201907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:17:13.625938  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.635895  201907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.657569  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.669769  201907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:17:13.680409  201907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:17:13.689086  201907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:17:13.809381  201907 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:17:13.990747  201907 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:17:13.990811  201907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:17:13.994662  201907 start.go:564] Will wait 60s for crictl version
	I1126 20:17:13.994719  201907 ssh_runner.go:195] Run: which crictl
	I1126 20:17:13.998246  201907 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:17:14.024637  201907 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:17:14.024723  201907 ssh_runner.go:195] Run: crio --version
	I1126 20:17:14.054496  201907 ssh_runner.go:195] Run: crio --version
	I1126 20:17:14.088566  201907 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1126 20:17:13.474943  202639 cli_runner.go:164] Run: docker network inspect pause-088343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:17:13.494067  202639 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:17:13.498430  202639 kubeadm.go:884] updating cluster {Name:pause-088343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-088343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:17:13.498632  202639 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:17:13.498693  202639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:17:13.567740  202639 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:17:13.568159  202639 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:17:13.568244  202639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:17:13.600551  202639 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:17:13.600580  202639 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:17:13.600606  202639 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1126 20:17:13.600745  202639 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-088343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-088343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:17:13.600826  202639 ssh_runner.go:195] Run: crio config
	I1126 20:17:13.665422  202639 cni.go:84] Creating CNI manager for ""
	I1126 20:17:13.665451  202639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:17:13.665497  202639 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:17:13.665527  202639 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-088343 NodeName:pause-088343 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:17:13.665686  202639 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-088343"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:17:13.665767  202639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:17:13.679656  202639 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:17:13.679717  202639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:17:13.689081  202639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1126 20:17:13.703017  202639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:17:13.717353  202639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1126 20:17:13.738259  202639 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:17:13.744851  202639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:17:13.888221  202639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:17:13.903419  202639 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343 for IP: 192.168.76.2
	I1126 20:17:13.903443  202639 certs.go:195] generating shared ca certs ...
	I1126 20:17:13.903559  202639 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:13.903748  202639 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:17:13.903811  202639 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:17:13.903826  202639 certs.go:257] generating profile certs ...
	I1126 20:17:13.903950  202639 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.key
	I1126 20:17:13.904033  202639 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/apiserver.key.faa3837e
	I1126 20:17:13.904089  202639 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/proxy-client.key
	I1126 20:17:13.904269  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:17:13.904317  202639 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:17:13.904334  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:17:13.904370  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:17:13.904409  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:17:13.904449  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:17:13.904537  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:17:13.905337  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:17:13.926411  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:17:13.949384  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:17:13.968745  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:17:13.989759  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 20:17:14.007915  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:17:14.027711  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:17:14.046791  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:17:14.065884  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:17:14.083958  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:17:14.104746  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:17:14.124264  202639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:17:14.137893  202639 ssh_runner.go:195] Run: openssl version
	I1126 20:17:14.145707  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:17:14.156039  202639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.160525  202639 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.160577  202639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.204558  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:17:14.214470  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:17:14.223376  202639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.227253  202639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.227303  202639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.276386  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:17:14.284156  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:17:14.293757  202639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:17:14.297569  202639 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:17:14.297623  202639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:17:14.089883  201907 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-225144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:17:14.108037  201907 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1126 20:17:14.111984  201907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:17:14.122171  201907 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-225144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-225144 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:17:14.122285  201907 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:17:14.122335  201907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:17:14.157344  201907 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:17:14.157367  201907 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:17:14.157419  201907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:17:14.184769  201907 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:17:14.184797  201907 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:17:14.184806  201907 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1126 20:17:14.184924  201907 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-225144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-225144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:17:14.185011  201907 ssh_runner.go:195] Run: crio config
	I1126 20:17:14.245009  201907 cni.go:84] Creating CNI manager for ""
	I1126 20:17:14.245045  201907 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:17:14.245065  201907 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:17:14.245091  201907 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-225144 NodeName:kubernetes-upgrade-225144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:17:14.245263  201907 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-225144"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:17:14.245341  201907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1126 20:17:14.253373  201907 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:17:14.253434  201907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:17:14.261180  201907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1126 20:17:14.273306  201907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:17:14.289524  201907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I1126 20:17:14.303495  201907 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:17:14.307162  201907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:17:14.316738  201907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:17:14.409364  201907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:17:14.435899  201907 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144 for IP: 192.168.103.2
	I1126 20:17:14.435917  201907 certs.go:195] generating shared ca certs ...
	I1126 20:17:14.435936  201907 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.436092  201907 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:17:14.436208  201907 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:17:14.436226  201907 certs.go:257] generating profile certs ...
	I1126 20:17:14.436292  201907 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.key
	I1126 20:17:14.436308  201907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.crt with IP's: []
	I1126 20:17:14.526512  201907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.crt ...
	I1126 20:17:14.526537  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.crt: {Name:mk8ca0ed83be291ec3801953a1afb0810c00f08b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.526695  201907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.key ...
	I1126 20:17:14.526711  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.key: {Name:mkaed1bf6ed48885d6fb38f3f0eee4801835e41e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.526821  201907 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key.4f7b8804
	I1126 20:17:14.526838  201907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt.4f7b8804 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1126 20:17:14.573320  201907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt.4f7b8804 ...
	I1126 20:17:14.573343  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt.4f7b8804: {Name:mk31a74ff9d2aa51c58ad70994ca7a15a1607fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.573505  201907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key.4f7b8804 ...
	I1126 20:17:14.573523  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key.4f7b8804: {Name:mk2df4e22a98e02fc26fa8ad159d779f5e628c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.573629  201907 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt.4f7b8804 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt
	I1126 20:17:14.573708  201907 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key.4f7b8804 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key
	I1126 20:17:14.573767  201907 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.key
	I1126 20:17:14.573782  201907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.crt with IP's: []
	I1126 20:17:14.334507  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:17:14.342268  202639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:17:14.346156  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:17:14.385351  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:17:14.423257  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:17:14.476107  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:17:14.513302  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:17:14.549057  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
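The run of `openssl x509 -checkend 86400` calls above asks, for each control-plane certificate, whether it will expire within the next 24 hours (86400 seconds); exit status 0 means the cert is still valid past that window. A sketch of the same check using a throwaway self-signed cert (the subject and paths are illustrative, not from the test environment):

```shell
# Generate a throwaway self-signed cert valid for 10 days, then ask
# openssl whether it will still be valid 24h (86400s) from now, as the
# log's per-cert checks do. -checkend exits 0 when the cert survives
# the window, 1 when it would expire inside it.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
  -days 10 -keyout "$dir/demo.key" -out "$dir/demo.crt" 2>/dev/null

openssl x509 -noout -in "$dir/demo.crt" -checkend 86400 \
  && echo "valid for at least 24h"
```

Because only the exit status matters, this check composes cleanly with the restart logic: a nonzero status on any cert is what triggers regeneration rather than reuse.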
	I1126 20:17:14.583839  202639 kubeadm.go:401] StartCluster: {Name:pause-088343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-088343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:17:14.583973  202639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:17:14.584039  202639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:17:14.612779  202639 cri.go:89] found id: "23efe5ed2d5129e2520b991840514eab0b95ecff061cdeed4dce4a50852925f9"
	I1126 20:17:14.612807  202639 cri.go:89] found id: "ded1aa6ae8ad4b032f860f18ec98c2321c1aecc1632c746bf85cfd4786777f49"
	I1126 20:17:14.612814  202639 cri.go:89] found id: "08f40eeb695e63781aee8a6dd64716e465781b63c96d5ecc21e5a1895bc0c4a0"
	I1126 20:17:14.612819  202639 cri.go:89] found id: "38113599f674298b9b3fea5b2a7c1e4c46276f1c818aa294ebd11f463d2b5429"
	I1126 20:17:14.612823  202639 cri.go:89] found id: "8fff4af24a8946fc7daabaa1caacec5d7deeaf3f92227b0f6671f78b34598cb6"
	I1126 20:17:14.612828  202639 cri.go:89] found id: "6ec60134d42facb0d523b0d825b2d62cae44913de8328d8a2ed7b3bcba96b481"
	I1126 20:17:14.612832  202639 cri.go:89] found id: "77ab3602e99c51dd1062dba3be62901ab816c07d84dd7894b76a2f95e108f6c0"
	I1126 20:17:14.612837  202639 cri.go:89] found id: ""
	I1126 20:17:14.612880  202639 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:17:14.624296  202639 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:17:14Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:17:14.624359  202639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:17:14.632215  202639 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:17:14.632233  202639 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:17:14.632275  202639 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:17:14.640289  202639 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:17:14.641192  202639 kubeconfig.go:125] found "pause-088343" server: "https://192.168.76.2:8443"
	I1126 20:17:14.642380  202639 kapi.go:59] client config for pause-088343: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.key", CAFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:17:14.642888  202639 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:17:14.642912  202639 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:17:14.642919  202639 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:17:14.642925  202639 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:17:14.642931  202639 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:17:14.643242  202639 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:17:14.650642  202639 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1126 20:17:14.650673  202639 kubeadm.go:602] duration metric: took 18.434547ms to restartPrimaryControlPlane
	I1126 20:17:14.650683  202639 kubeadm.go:403] duration metric: took 66.852446ms to StartCluster
	I1126 20:17:14.650695  202639 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.650745  202639 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:17:14.651663  202639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.651908  202639 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:17:14.651971  202639 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:17:14.652260  202639 config.go:182] Loaded profile config "pause-088343": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:17:14.653724  202639 out.go:179] * Enabled addons: 
	I1126 20:17:14.653730  202639 out.go:179] * Verifying Kubernetes components...
	I1126 20:17:14.649579  201907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.crt ...
	I1126 20:17:14.649602  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.crt: {Name:mk9de2ab2e4b93572b6a95d7385771a05d0e808a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.649746  201907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.key ...
	I1126 20:17:14.649762  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.key: {Name:mk54b8d481d43878c8f56b75e3a8a0b524eb6308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.649974  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:17:14.650021  201907 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:17:14.650037  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:17:14.650075  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:17:14.650110  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:17:14.650144  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:17:14.650206  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:17:14.650894  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:17:14.668940  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:17:14.691646  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:17:14.712596  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:17:14.729121  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1126 20:17:14.745908  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:17:14.762986  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:17:14.780644  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:17:14.799794  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:17:14.820180  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:17:14.839417  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:17:14.869917  201907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:17:14.883764  201907 ssh_runner.go:195] Run: openssl version
	I1126 20:17:14.890407  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:17:14.898761  201907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.902206  201907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.902261  201907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.936659  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:17:14.945848  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:17:14.954193  201907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.958373  201907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.958441  201907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.997642  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:17:15.006634  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:17:15.014797  201907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:17:15.018274  201907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:17:15.018330  201907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:17:15.052504  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
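The `test -L … || ln -fs …` commands above implement OpenSSL's hashed-directory lookup convention: trust stores like `/etc/ssl/certs` are searched via symlinks named `<subject-hash>.0`, where the hash comes from `openssl x509 -hash` (this is what `c_rehash` automates). A sketch of the same guarded-link pattern with a throwaway CA cert (the actual hash value differs per certificate subject):

```shell
# OpenSSL locates CA certs in a directory via symlinks named after the
# subject-name hash (e.g. b5213941.0 for minikubeCA.pem in the log).
# Reproduce the pattern with a throwaway self-signed cert.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -days 1 -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null

hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# Same guard as the log: only create the link if it is not already there.
test -L "$dir/$hash.0" || ln -fs "$dir/ca.pem" "$dir/$hash.0"
readlink "$dir/$hash.0"
```

The `.0` suffix is a collision index: if two distinct certs hashed to the same value, the second would be linked as `<hash>.1`, and OpenSSL tries suffixes in order until the subject matches.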
	I1126 20:17:15.062263  201907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:17:15.065773  201907 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:17:15.065828  201907 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-225144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-225144 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:17:15.065913  201907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:17:15.065965  201907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:17:15.092209  201907 cri.go:89] found id: ""
	I1126 20:17:15.092282  201907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:17:15.100292  201907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:17:15.108041  201907 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:17:15.108102  201907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:17:15.115297  201907 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:17:15.115319  201907 kubeadm.go:158] found existing configuration files:
	
	I1126 20:17:15.115363  201907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:17:15.123104  201907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:17:15.123156  201907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:17:15.130094  201907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:17:15.137300  201907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:17:15.137349  201907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:17:15.144353  201907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:17:15.151898  201907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:17:15.151946  201907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:17:15.158840  201907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:17:15.166162  201907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:17:15.166224  201907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
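The grep/rm sequence above checks each kubeconfig for the expected control-plane endpoint and removes any file that lacks it (here all four are simply missing, so each `grep` exits 2 and `rm -f` is a no-op). A minimal Python sketch of that cleanup logic, assuming the same four files and endpoint string; `cleanup_stale_configs` is a hypothetical name, not minikube's API:

```python
import os
from contextlib import suppress

ENDPOINT = "https://control-plane.minikube.internal:8443"
CONFS = ["admin.conf", "kubelet.conf",
         "controller-manager.conf", "scheduler.conf"]

def cleanup_stale_configs(kube_dir):
    """Keep a kubeconfig only if it references the expected control-plane
    endpoint; otherwise delete it, tolerating missing files like `rm -f`."""
    removed = []
    for name in CONFS:
        path = os.path.join(kube_dir, name)
        keep = False
        with suppress(FileNotFoundError):
            with open(path) as f:
                keep = ENDPOINT in f.read()   # the `grep` step in the log
        if not keep:
            with suppress(FileNotFoundError):
                os.remove(path)               # the `rm -f` step in the log
            removed.append(name)
    return removed
```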
	I1126 20:17:15.173684  201907 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:17:15.222885  201907 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1126 20:17:15.222993  201907 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:17:15.262120  201907 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:17:15.262216  201907 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:17:15.262296  201907 kubeadm.go:319] OS: Linux
	I1126 20:17:15.262373  201907 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:17:15.262443  201907 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:17:15.262541  201907 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:17:15.262619  201907 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:17:15.262693  201907 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:17:15.262769  201907 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:17:15.262842  201907 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:17:15.262911  201907 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:17:15.329721  201907 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:17:15.329875  201907 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:17:15.330038  201907 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1126 20:17:15.483898  201907 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:17:14.657819  202639 addons.go:530] duration metric: took 5.852205ms for enable addons: enabled=[]
	I1126 20:17:14.657849  202639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:17:14.768156  202639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:17:14.782622  202639 node_ready.go:35] waiting up to 6m0s for node "pause-088343" to be "Ready" ...
	I1126 20:17:14.790398  202639 node_ready.go:49] node "pause-088343" is "Ready"
	I1126 20:17:14.790429  202639 node_ready.go:38] duration metric: took 7.779156ms for node "pause-088343" to be "Ready" ...
	I1126 20:17:14.790445  202639 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:17:14.790507  202639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:17:14.802026  202639 api_server.go:72] duration metric: took 150.084217ms to wait for apiserver process to appear ...
	I1126 20:17:14.802051  202639 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:17:14.802068  202639 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:17:14.806638  202639 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:17:14.807620  202639 api_server.go:141] control plane version: v1.34.1
	I1126 20:17:14.807644  202639 api_server.go:131] duration metric: took 5.586341ms to wait for apiserver health ...
	I1126 20:17:14.807652  202639 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:17:14.810680  202639 system_pods.go:59] 7 kube-system pods found
	I1126 20:17:14.810719  202639 system_pods.go:61] "coredns-66bc5c9577-npkd9" [4e96858c-42d7-4bb9-a5a9-252f2585bf9b] Running
	I1126 20:17:14.810734  202639 system_pods.go:61] "etcd-pause-088343" [5f9e8c9b-fb82-4c5c-a067-e4f9d8d58f0d] Running
	I1126 20:17:14.810740  202639 system_pods.go:61] "kindnet-s6tf4" [48867150-fa26-4bb5-91d9-91a5d5d2f6ee] Running
	I1126 20:17:14.810749  202639 system_pods.go:61] "kube-apiserver-pause-088343" [96890fe3-682e-44c9-87f6-5f5d9a409126] Running
	I1126 20:17:14.810755  202639 system_pods.go:61] "kube-controller-manager-pause-088343" [062bce23-d1a8-47d1-a42e-86331d362308] Running
	I1126 20:17:14.810761  202639 system_pods.go:61] "kube-proxy-j4rc4" [7036e338-edf1-43c2-b5b3-213e285bdd62] Running
	I1126 20:17:14.810766  202639 system_pods.go:61] "kube-scheduler-pause-088343" [ac9f62f1-364b-4832-bf3d-9c76acbb00bf] Running
	I1126 20:17:14.810777  202639 system_pods.go:74] duration metric: took 3.11854ms to wait for pod list to return data ...
	I1126 20:17:14.810785  202639 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:17:14.812745  202639 default_sa.go:45] found service account: "default"
	I1126 20:17:14.812765  202639 default_sa.go:55] duration metric: took 1.97142ms for default service account to be created ...
	I1126 20:17:14.812775  202639 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:17:14.815432  202639 system_pods.go:86] 7 kube-system pods found
	I1126 20:17:14.815470  202639 system_pods.go:89] "coredns-66bc5c9577-npkd9" [4e96858c-42d7-4bb9-a5a9-252f2585bf9b] Running
	I1126 20:17:14.815480  202639 system_pods.go:89] "etcd-pause-088343" [5f9e8c9b-fb82-4c5c-a067-e4f9d8d58f0d] Running
	I1126 20:17:14.815486  202639 system_pods.go:89] "kindnet-s6tf4" [48867150-fa26-4bb5-91d9-91a5d5d2f6ee] Running
	I1126 20:17:14.815494  202639 system_pods.go:89] "kube-apiserver-pause-088343" [96890fe3-682e-44c9-87f6-5f5d9a409126] Running
	I1126 20:17:14.815500  202639 system_pods.go:89] "kube-controller-manager-pause-088343" [062bce23-d1a8-47d1-a42e-86331d362308] Running
	I1126 20:17:14.815506  202639 system_pods.go:89] "kube-proxy-j4rc4" [7036e338-edf1-43c2-b5b3-213e285bdd62] Running
	I1126 20:17:14.815511  202639 system_pods.go:89] "kube-scheduler-pause-088343" [ac9f62f1-364b-4832-bf3d-9c76acbb00bf] Running
	I1126 20:17:14.815518  202639 system_pods.go:126] duration metric: took 2.73742ms to wait for k8s-apps to be running ...
	I1126 20:17:14.815532  202639 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:17:14.815576  202639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:17:14.830082  202639 system_svc.go:56] duration metric: took 14.542615ms WaitForService to wait for kubelet
	I1126 20:17:14.830109  202639 kubeadm.go:587] duration metric: took 178.16907ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:17:14.830131  202639 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:17:14.832244  202639 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:17:14.832265  202639 node_conditions.go:123] node cpu capacity is 8
	I1126 20:17:14.832280  202639 node_conditions.go:105] duration metric: took 2.143451ms to run NodePressure ...
	I1126 20:17:14.832294  202639 start.go:242] waiting for startup goroutines ...
	I1126 20:17:14.832305  202639 start.go:247] waiting for cluster config update ...
	I1126 20:17:14.832318  202639 start.go:256] writing updated cluster config ...
	I1126 20:17:14.832633  202639 ssh_runner.go:195] Run: rm -f paused
	I1126 20:17:14.836186  202639 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:17:14.836940  202639 kapi.go:59] client config for pause-088343: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.key", CAFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:17:14.839728  202639 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-npkd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.847177  202639 pod_ready.go:94] pod "coredns-66bc5c9577-npkd9" is "Ready"
	I1126 20:17:14.847204  202639 pod_ready.go:86] duration metric: took 7.451897ms for pod "coredns-66bc5c9577-npkd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.852958  202639 pod_ready.go:83] waiting for pod "etcd-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.860276  202639 pod_ready.go:94] pod "etcd-pause-088343" is "Ready"
	I1126 20:17:14.860302  202639 pod_ready.go:86] duration metric: took 7.323261ms for pod "etcd-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.868231  202639 pod_ready.go:83] waiting for pod "kube-apiserver-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.872334  202639 pod_ready.go:94] pod "kube-apiserver-pause-088343" is "Ready"
	I1126 20:17:14.872358  202639 pod_ready.go:86] duration metric: took 4.103783ms for pod "kube-apiserver-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.874270  202639 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:15.241380  202639 pod_ready.go:94] pod "kube-controller-manager-pause-088343" is "Ready"
	I1126 20:17:15.241410  202639 pod_ready.go:86] duration metric: took 367.123406ms for pod "kube-controller-manager-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:15.440642  202639 pod_ready.go:83] waiting for pod "kube-proxy-j4rc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:15.840897  202639 pod_ready.go:94] pod "kube-proxy-j4rc4" is "Ready"
	I1126 20:17:15.840922  202639 pod_ready.go:86] duration metric: took 400.257489ms for pod "kube-proxy-j4rc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:16.040420  202639 pod_ready.go:83] waiting for pod "kube-scheduler-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:16.439428  202639 pod_ready.go:94] pod "kube-scheduler-pause-088343" is "Ready"
	I1126 20:17:16.439452  202639 pod_ready.go:86] duration metric: took 399.006281ms for pod "kube-scheduler-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:16.439474  202639 pod_ready.go:40] duration metric: took 1.603245156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
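The pod_ready lines above poll each kube-system pod until it reports "Ready" or the 4m0s budget runs out. A simplified Python sketch of that wait loop (the real implementation is Go in minikube's pod_ready.go; the interval and helper names here are assumptions):

```python
import time

def wait_ready(check, timeout=240.0, interval=0.4):
    """Poll check() until it returns True or the deadline passes.
    Returns True on readiness, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():          # e.g. "is pod X Ready or gone?"
            return True
        time.sleep(interval)
    return False

# Example: a pod that becomes Ready on the third poll.
state = {"polls": 0}
def pod_is_ready():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_ready(pod_is_ready, timeout=5, interval=0.01))  # True
```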
	I1126 20:17:16.482690  202639 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:17:16.513906  202639 out.go:179] * Done! kubectl is now configured to use "pause-088343" cluster and "default" namespace by default
	I1126 20:17:12.493940  204288 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:17:12.494232  204288 start.go:159] libmachine.API.Create for "stopped-upgrade-211103" (driver="docker")
	I1126 20:17:12.494265  204288 client.go:168] LocalClient.Create starting
	I1126 20:17:12.494354  204288 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 20:17:12.494391  204288 main.go:141] libmachine: Decoding PEM data...
	I1126 20:17:12.494408  204288 main.go:141] libmachine: Parsing certificate...
	I1126 20:17:12.494495  204288 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 20:17:12.494522  204288 main.go:141] libmachine: Decoding PEM data...
	I1126 20:17:12.494533  204288 main.go:141] libmachine: Parsing certificate...
	I1126 20:17:12.495000  204288 cli_runner.go:164] Run: docker network inspect stopped-upgrade-211103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:17:12.514669  204288 cli_runner.go:211] docker network inspect stopped-upgrade-211103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:17:12.514750  204288 network_create.go:284] running [docker network inspect stopped-upgrade-211103] to gather additional debugging logs...
	I1126 20:17:12.514770  204288 cli_runner.go:164] Run: docker network inspect stopped-upgrade-211103
	W1126 20:17:12.533239  204288 cli_runner.go:211] docker network inspect stopped-upgrade-211103 returned with exit code 1
	I1126 20:17:12.533268  204288 network_create.go:287] error running [docker network inspect stopped-upgrade-211103]: docker network inspect stopped-upgrade-211103: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network stopped-upgrade-211103 not found
	I1126 20:17:12.533295  204288 network_create.go:289] output of [docker network inspect stopped-upgrade-211103]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network stopped-upgrade-211103 not found
	
	** /stderr **
	I1126 20:17:12.533470  204288 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:17:12.552550  204288 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
	I1126 20:17:12.553176  204288 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bbc6aaddce08 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:93:ea:3f:0d:7e} reservation:<nil>}
	I1126 20:17:12.553819  204288 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ecd673f5b2f2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:d0:57:1a:06:91} reservation:<nil>}
	I1126 20:17:12.554411  204288 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-53ea54025484 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1e:01:df:bb:7d:a4} reservation:<nil>}
	I1126 20:17:12.555252  204288 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e15870}
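The network.go lines above walk candidate private /24 subnets, skip the four already bound to bridge interfaces, and settle on 192.168.85.0/24. A hedged Python sketch of that scan, assuming candidates step by 9 in the third octet (which matches the 49 → 58 → 67 → 76 → 85 sequence in the log):

```python
import ipaddress

def first_free_subnet(taken, start="192.168.49.0", step=9, tries=20):
    """Return the first candidate /24 not present in `taken`
    (a set of ipaddress.ip_network objects), or None."""
    a, b, c, _ = map(int, start.split("."))
    for i in range(tries):
        third = c + i * step
        if third > 255:
            break
        cidr = f"{a}.{b}.{third}.0/24"
        if ipaddress.ip_network(cidr) not in taken:
            return cidr
    return None

# The four subnets the log reports as taken:
taken = {ipaddress.ip_network(c) for c in
         ["192.168.49.0/24", "192.168.58.0/24",
          "192.168.67.0/24", "192.168.76.0/24"]}
print(first_free_subnet(taken))  # 192.168.85.0/24
```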
	I1126 20:17:12.555276  204288 network_create.go:124] attempt to create docker network stopped-upgrade-211103 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1126 20:17:12.555332  204288 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=stopped-upgrade-211103 stopped-upgrade-211103
	I1126 20:17:12.605801  204288 network_create.go:108] docker network stopped-upgrade-211103 192.168.85.0/24 created
	I1126 20:17:12.605847  204288 kic.go:121] calculated static IP "192.168.85.2" for the "stopped-upgrade-211103" container
	I1126 20:17:12.605924  204288 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:17:12.630700  204288 cli_runner.go:164] Run: docker volume create stopped-upgrade-211103 --label name.minikube.sigs.k8s.io=stopped-upgrade-211103 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:17:12.647729  204288 oci.go:103] Successfully created a docker volume stopped-upgrade-211103
	I1126 20:17:12.647840  204288 cli_runner.go:164] Run: docker run --rm --name stopped-upgrade-211103-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-211103 --entrypoint /usr/bin/test -v stopped-upgrade-211103:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1126 20:17:13.584010  204288 oci.go:107] Successfully prepared a docker volume stopped-upgrade-211103
	I1126 20:17:13.584074  204288 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1126 20:17:13.584102  204288 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:17:13.584180  204288 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-211103:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:17:15.489446  201907 out.go:252]   - Generating certificates and keys ...
	I1126 20:17:15.489556  201907 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:17:15.489655  201907 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:17:15.949935  201907 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:17:16.247637  201907 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:17:16.671156  201907 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:17:16.827309  201907 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:17:16.973201  201907 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:17:16.973383  201907 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-225144 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:17:17.120393  201907 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:17:17.121375  201907 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-225144 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:17:17.618562  201907 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:17:17.711260  201907 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:17:17.779416  201907 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:17:17.779533  201907 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:17:17.919024  201907 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:17:18.148942  201907 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:17:18.243823  201907 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:17:18.384814  201907 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:17:18.385539  201907 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:17:18.390873  201907 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 20:17:13.849678  205232 delete.go:124] DEMOLISHING missing-upgrade-521324 ...
	I1126 20:17:13.849758  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:13.869593  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	W1126 20:17:13.869651  205232 stop.go:83] unable to get state: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.869669  205232 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.870094  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:13.889643  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:13.889717  205232 delete.go:82] Unable to get host status for missing-upgrade-521324, assuming it has already been deleted: state: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.889780  205232 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-521324
	W1126 20:17:13.911626  205232 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-521324 returned with exit code 1
	I1126 20:17:13.911665  205232 kic.go:371] could not find the container missing-upgrade-521324 to remove it. will try anyways
	I1126 20:17:13.911710  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:13.934683  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	W1126 20:17:13.934736  205232 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.934787  205232 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-521324 /bin/bash -c "sudo init 0"
	W1126 20:17:13.954899  205232 cli_runner.go:211] docker exec --privileged -t missing-upgrade-521324 /bin/bash -c "sudo init 0" returned with exit code 1
	I1126 20:17:13.954933  205232 oci.go:659] error shutdown missing-upgrade-521324: docker exec --privileged -t missing-upgrade-521324 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:14.955644  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:14.974447  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:14.974519  205232 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:14.974532  205232 oci.go:673] temporary error: container missing-upgrade-521324 status is  but expect it to be exited
	I1126 20:17:14.974568  205232 retry.go:31] will retry after 255.124429ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:15.229853  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:15.248596  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:15.248646  205232 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:15.248673  205232 oci.go:673] temporary error: container missing-upgrade-521324 status is  but expect it to be exited
	I1126 20:17:15.248705  205232 retry.go:31] will retry after 565.430892ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:15.814472  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:15.834014  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:15.834094  205232 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:15.834109  205232 oci.go:673] temporary error: container missing-upgrade-521324 status is  but expect it to be exited
	I1126 20:17:15.834136  205232 retry.go:31] will retry after 944.306376ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:16.778621  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:16.796722  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:16.796782  205232 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:16.796797  205232 oci.go:673] temporary error: container missing-upgrade-521324 status is  but expect it to be exited
	I1126 20:17:16.796831  205232 retry.go:31] will retry after 1.884179029s: couldn't verify container is exited. %v: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	
	
	==> CRI-O <==
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.266234331Z" level=info msg="RDT not available in the host system"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.266252272Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267099923Z" level=info msg="Conmon does support the --sync option"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267120335Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267138402Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267847638Z" level=info msg="Conmon does support the --sync option"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267863609Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.271731715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.271756742Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.272312046Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.272678862Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.272722101Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.358358438Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-npkd9 Namespace:kube-system ID:45137e979e355bffa133eac88dfd24d461e4290f2a744bf74ef0704290d171e8 UID:4e96858c-42d7-4bb9-a5a9-252f2585bf9b NetNS:/var/run/netns/3c411218-1f43-4fb6-a8ee-70e167d20a57 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b0f0}] Aliases:map[]}"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.35863388Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-npkd9 for CNI network kindnet (type=ptp)"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.3591271Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359167873Z" level=info msg="Starting seccomp notifier watcher"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359237706Z" level=info msg="Create NRI interface"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359922865Z" level=info msg="built-in NRI default validator is disabled"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359961017Z" level=info msg="runtime interface created"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359976871Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359985371Z" level=info msg="runtime interface starting up..."
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359997553Z" level=info msg="starting plugins..."
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.360014095Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.36063018Z" level=info msg="No systemd watchdog enabled"
	Nov 26 20:17:13 pause-088343 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	23efe5ed2d512       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago       Running             coredns                   0                   45137e979e355       coredns-66bc5c9577-npkd9               kube-system
	ded1aa6ae8ad4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   55 seconds ago       Running             kube-proxy                0                   ea48e63076da1       kube-proxy-j4rc4                       kube-system
	08f40eeb695e6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   55 seconds ago       Running             kindnet-cni               0                   309d7385e9ca1       kindnet-s6tf4                          kube-system
	38113599f6742       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   59dc44ee95346       etcd-pause-088343                      kube-system
	8fff4af24a894       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   a4bd719fef483       kube-controller-manager-pause-088343   kube-system
	6ec60134d42fa       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   43e91e1d13dd2       kube-scheduler-pause-088343            kube-system
	77ab3602e99c5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   b293cedf7b086       kube-apiserver-pause-088343            kube-system
	
	
	==> coredns [23efe5ed2d5129e2520b991840514eab0b95ecff061cdeed4dce4a50852925f9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51119 - 16281 "HINFO IN 5625130158383906075.1338709804454349835. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.124283077s
	
	
	==> describe nodes <==
	Name:               pause-088343
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-088343
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=pause-088343
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_16_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:16:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-088343
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:17:05 +0000   Wed, 26 Nov 2025 20:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:17:05 +0000   Wed, 26 Nov 2025 20:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:17:05 +0000   Wed, 26 Nov 2025 20:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:17:05 +0000   Wed, 26 Nov 2025 20:17:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-088343
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                5f3e1164-b5cf-4da8-831d-d2b903341fe7
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-npkd9                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-pause-088343                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-s6tf4                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-pause-088343             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-pause-088343    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-j4rc4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-pause-088343             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 55s   kube-proxy       
	  Normal  Starting                 62s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s   kubelet          Node pause-088343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s   kubelet          Node pause-088343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s   kubelet          Node pause-088343 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s   node-controller  Node pause-088343 event: Registered Node pause-088343 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-088343 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [38113599f674298b9b3fea5b2a7c1e4c46276f1c818aa294ebd11f463d2b5429] <==
	{"level":"warn","ts":"2025-11-26T20:16:15.348910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.350016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.361307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.368153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.389745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.410717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.419614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.429061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.444036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.455842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.481759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.507799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.523502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.540878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.550776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.570702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.665582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38430","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T20:16:38.136799Z","caller":"traceutil/trace.go:172","msg":"trace[1534011608] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"127.927811ms","start":"2025-11-26T20:16:38.008850Z","end":"2025-11-26T20:16:38.136777Z","steps":["trace[1534011608] 'process raft request'  (duration: 64.33728ms)","trace[1534011608] 'compare'  (duration: 63.48434ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-26T20:16:47.460000Z","caller":"traceutil/trace.go:172","msg":"trace[554231108] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"198.993369ms","start":"2025-11-26T20:16:47.260989Z","end":"2025-11-26T20:16:47.459983Z","steps":["trace[554231108] 'process raft request'  (duration: 198.866517ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T20:17:07.957246Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.271189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:17:07.957325Z","caller":"traceutil/trace.go:172","msg":"trace[1012265623] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:414; }","duration":"100.36965ms","start":"2025-11-26T20:17:07.856938Z","end":"2025-11-26T20:17:07.957307Z","steps":["trace[1012265623] 'range keys from in-memory index tree'  (duration: 100.212202ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T20:17:10.107996Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.407726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:17:10.108308Z","caller":"traceutil/trace.go:172","msg":"trace[1754885902] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:416; }","duration":"148.722482ms","start":"2025-11-26T20:17:09.959570Z","end":"2025-11-26T20:17:10.108292Z","steps":["trace[1754885902] 'range keys from in-memory index tree'  (duration: 148.348531ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T20:17:10.107997Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.701975ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:17:10.108509Z","caller":"traceutil/trace.go:172","msg":"trace[1721702882] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:416; }","duration":"151.245582ms","start":"2025-11-26T20:17:09.957243Z","end":"2025-11-26T20:17:10.108489Z","steps":["trace[1721702882] 'range keys from in-memory index tree'  (duration: 150.685559ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:17:20 up 59 min,  0 user,  load average: 4.20, 2.11, 1.33
	Linux pause-088343 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [08f40eeb695e63781aee8a6dd64716e465781b63c96d5ecc21e5a1895bc0c4a0] <==
	I1126 20:16:24.931004       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:16:24.931305       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:16:24.932945       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:16:24.933150       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:16:24.933207       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:16:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:16:25.307565       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:16:25.307637       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:16:25.307671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:16:25.307823       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:16:55.232382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:16:55.232380       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:16:55.232382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:16:55.232539       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1126 20:16:56.508301       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:16:56.508330       1 metrics.go:72] Registering metrics
	I1126 20:16:56.508402       1 controller.go:711] "Syncing nftables rules"
	I1126 20:17:05.231091       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:17:05.231131       1 main.go:301] handling current node
	I1126 20:17:15.230730       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:17:15.230767       1 main.go:301] handling current node
	
	
	==> kube-apiserver [77ab3602e99c51dd1062dba3be62901ab816c07d84dd7894b76a2f95e108f6c0] <==
	I1126 20:16:16.431754       1 policy_source.go:240] refreshing policies
	E1126 20:16:16.433763       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1126 20:16:16.481444       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:16:16.483941       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:16:16.485429       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:16:16.490916       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:16:16.491398       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:16:16.613245       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:16:17.278214       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:16:17.282160       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:16:17.282178       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:16:17.798308       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:16:17.840923       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:16:17.986507       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:16:17.995103       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1126 20:16:17.996119       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:16:18.000575       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:16:18.342855       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:16:18.967872       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:16:18.978026       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:16:18.988413       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:16:23.743219       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:16:24.144672       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1126 20:16:24.530416       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:16:24.541962       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [8fff4af24a8946fc7daabaa1caacec5d7deeaf3f92227b0f6671f78b34598cb6] <==
	I1126 20:16:23.333341       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:16:23.338205       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:16:23.339444       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1126 20:16:23.339753       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 20:16:23.339814       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:16:23.339903       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 20:16:23.339949       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:16:23.340030       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:16:23.340071       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:16:23.340107       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:16:23.340173       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:16:23.340228       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:16:23.341298       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:16:23.342359       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:16:23.342798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:16:23.342820       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:16:23.342827       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:16:23.343939       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:16:23.344399       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:16:23.345642       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 20:16:23.345711       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:16:23.348994       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:16:23.353254       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:16:23.357560       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:17:08.285081       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ded1aa6ae8ad4b032f860f18ec98c2321c1aecc1632c746bf85cfd4786777f49] <==
	I1126 20:16:24.771569       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:16:24.876203       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:16:24.976649       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:16:24.977095       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:16:24.977215       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:16:25.021078       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:16:25.021144       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:16:25.028843       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:16:25.029605       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:16:25.029927       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:16:25.032415       1 config.go:200] "Starting service config controller"
	I1126 20:16:25.033776       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:16:25.033048       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:16:25.033929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:16:25.033277       1 config.go:309] "Starting node config controller"
	I1126 20:16:25.034019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:16:25.034140       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:16:25.033020       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:16:25.035206       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:16:25.134303       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:16:25.134312       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:16:25.135409       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6ec60134d42facb0d523b0d825b2d62cae44913de8328d8a2ed7b3bcba96b481] <==
	E1126 20:16:16.429553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:16:16.429567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:16:16.429604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:16:16.429644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:16:16.429660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:16:16.429735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:16:16.429814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:16:16.430449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:16:16.430502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:16:16.430567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:16:16.430597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:16:16.430595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:16:16.430744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:16:17.245485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:16:17.345644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:16:17.350675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:16:17.383897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:16:17.430878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:16:17.433852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:16:17.459931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:16:17.479918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1126 20:16:17.490939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:16:17.518977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:16:17.551351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1126 20:16:19.327046       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:16:19 pause-088343 kubelet[1291]: I1126 20:16:19.935057    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-088343" podStartSLOduration=0.935036237 podStartE2EDuration="935.036237ms" podCreationTimestamp="2025-11-26 20:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:19.916656529 +0000 UTC m=+1.146136834" watchObservedRunningTime="2025-11-26 20:16:19.935036237 +0000 UTC m=+1.164516535"
	Nov 26 20:16:19 pause-088343 kubelet[1291]: I1126 20:16:19.956364    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-088343" podStartSLOduration=0.956322559 podStartE2EDuration="956.322559ms" podCreationTimestamp="2025-11-26 20:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:19.935370331 +0000 UTC m=+1.164850633" watchObservedRunningTime="2025-11-26 20:16:19.956322559 +0000 UTC m=+1.185802857"
	Nov 26 20:16:19 pause-088343 kubelet[1291]: I1126 20:16:19.964446    1291 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 26 20:16:19 pause-088343 kubelet[1291]: I1126 20:16:19.980840    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-088343" podStartSLOduration=0.980820585 podStartE2EDuration="980.820585ms" podCreationTimestamp="2025-11-26 20:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:19.959204851 +0000 UTC m=+1.188685155" watchObservedRunningTime="2025-11-26 20:16:19.980820585 +0000 UTC m=+1.210300885"
	Nov 26 20:16:23 pause-088343 kubelet[1291]: I1126 20:16:23.357913    1291 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 26 20:16:23 pause-088343 kubelet[1291]: I1126 20:16:23.358699    1291 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198182    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48867150-fa26-4bb5-91d9-91a5d5d2f6ee-lib-modules\") pod \"kindnet-s6tf4\" (UID: \"48867150-fa26-4bb5-91d9-91a5d5d2f6ee\") " pod="kube-system/kindnet-s6tf4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198226    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7036e338-edf1-43c2-b5b3-213e285bdd62-xtables-lock\") pod \"kube-proxy-j4rc4\" (UID: \"7036e338-edf1-43c2-b5b3-213e285bdd62\") " pod="kube-system/kube-proxy-j4rc4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198251    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gncrd\" (UniqueName: \"kubernetes.io/projected/48867150-fa26-4bb5-91d9-91a5d5d2f6ee-kube-api-access-gncrd\") pod \"kindnet-s6tf4\" (UID: \"48867150-fa26-4bb5-91d9-91a5d5d2f6ee\") " pod="kube-system/kindnet-s6tf4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198277    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7036e338-edf1-43c2-b5b3-213e285bdd62-kube-proxy\") pod \"kube-proxy-j4rc4\" (UID: \"7036e338-edf1-43c2-b5b3-213e285bdd62\") " pod="kube-system/kube-proxy-j4rc4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198296    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7036e338-edf1-43c2-b5b3-213e285bdd62-lib-modules\") pod \"kube-proxy-j4rc4\" (UID: \"7036e338-edf1-43c2-b5b3-213e285bdd62\") " pod="kube-system/kube-proxy-j4rc4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198315    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fml7q\" (UniqueName: \"kubernetes.io/projected/7036e338-edf1-43c2-b5b3-213e285bdd62-kube-api-access-fml7q\") pod \"kube-proxy-j4rc4\" (UID: \"7036e338-edf1-43c2-b5b3-213e285bdd62\") " pod="kube-system/kube-proxy-j4rc4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198348    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/48867150-fa26-4bb5-91d9-91a5d5d2f6ee-cni-cfg\") pod \"kindnet-s6tf4\" (UID: \"48867150-fa26-4bb5-91d9-91a5d5d2f6ee\") " pod="kube-system/kindnet-s6tf4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198404    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48867150-fa26-4bb5-91d9-91a5d5d2f6ee-xtables-lock\") pod \"kindnet-s6tf4\" (UID: \"48867150-fa26-4bb5-91d9-91a5d5d2f6ee\") " pod="kube-system/kindnet-s6tf4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.964112    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j4rc4" podStartSLOduration=0.96408687 podStartE2EDuration="964.08687ms" podCreationTimestamp="2025-11-26 20:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:24.962221665 +0000 UTC m=+6.191701966" watchObservedRunningTime="2025-11-26 20:16:24.96408687 +0000 UTC m=+6.193567172"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.979437    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s6tf4" podStartSLOduration=0.979417555 podStartE2EDuration="979.417555ms" podCreationTimestamp="2025-11-26 20:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:24.978922537 +0000 UTC m=+6.208402839" watchObservedRunningTime="2025-11-26 20:16:24.979417555 +0000 UTC m=+6.208897855"
	Nov 26 20:17:05 pause-088343 kubelet[1291]: I1126 20:17:05.758429    1291 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 26 20:17:05 pause-088343 kubelet[1291]: I1126 20:17:05.809053    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e96858c-42d7-4bb9-a5a9-252f2585bf9b-config-volume\") pod \"coredns-66bc5c9577-npkd9\" (UID: \"4e96858c-42d7-4bb9-a5a9-252f2585bf9b\") " pod="kube-system/coredns-66bc5c9577-npkd9"
	Nov 26 20:17:05 pause-088343 kubelet[1291]: I1126 20:17:05.809097    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9cf8\" (UniqueName: \"kubernetes.io/projected/4e96858c-42d7-4bb9-a5a9-252f2585bf9b-kube-api-access-s9cf8\") pod \"coredns-66bc5c9577-npkd9\" (UID: \"4e96858c-42d7-4bb9-a5a9-252f2585bf9b\") " pod="kube-system/coredns-66bc5c9577-npkd9"
	Nov 26 20:17:07 pause-088343 kubelet[1291]: I1126 20:17:07.060978    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-npkd9" podStartSLOduration=43.060957689 podStartE2EDuration="43.060957689s" podCreationTimestamp="2025-11-26 20:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:17:07.048604501 +0000 UTC m=+48.278084804" watchObservedRunningTime="2025-11-26 20:17:07.060957689 +0000 UTC m=+48.290437990"
	Nov 26 20:17:17 pause-088343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:17:17 pause-088343 kubelet[1291]: I1126 20:17:17.162565    1291 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 26 20:17:17 pause-088343 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:17:17 pause-088343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:17:17 pause-088343 systemd[1]: kubelet.service: Consumed 2.238s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-088343 -n pause-088343
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-088343 -n pause-088343: exit status 2 (359.353075ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-088343 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-088343
helpers_test.go:243: (dbg) docker inspect pause-088343:

-- stdout --
	[
	    {
	        "Id": "19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723",
	        "Created": "2025-11-26T20:16:03.801250454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:16:03.841698262Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723/hostname",
	        "HostsPath": "/var/lib/docker/containers/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723/hosts",
	        "LogPath": "/var/lib/docker/containers/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723/19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723-json.log",
	        "Name": "/pause-088343",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-088343:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-088343",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "19d374e09e092cc18ef2914c8812a160dc6483c533082a3d68792078892e0723",
	                "LowerDir": "/var/lib/docker/overlay2/66139dc8129f18048e737c32330c0854b27229245478eff75ae47a543b1b4ea6-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/66139dc8129f18048e737c32330c0854b27229245478eff75ae47a543b1b4ea6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/66139dc8129f18048e737c32330c0854b27229245478eff75ae47a543b1b4ea6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/66139dc8129f18048e737c32330c0854b27229245478eff75ae47a543b1b4ea6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-088343",
	                "Source": "/var/lib/docker/volumes/pause-088343/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-088343",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-088343",
	                "name.minikube.sigs.k8s.io": "pause-088343",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5984d0bc20a107bcd2cc7945ae887cc691d291c34ac31538f25817887a9337fd",
	            "SandboxKey": "/var/run/docker/netns/5984d0bc20a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-088343": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "53ea54025484ac2c9df8cec28a4fbb6a8eb5da8d25f389978ebd3b8f51588cdb",
	                    "EndpointID": "acea68d82aff9a5b2467980fc663ff543304e9c393d23c254342c6a76a91ee9d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "1a:05:75:24:f2:47",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-088343",
	                        "19d374e09e09"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-088343 -n pause-088343
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-088343 -n pause-088343: exit status 2 (396.012131ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-088343 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-926822 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-926822       │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:15 UTC │
	│ delete  │ -p scheduled-stop-926822                                                                                                                 │ scheduled-stop-926822       │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:15 UTC │
	│ start   │ -p insufficient-storage-946161 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-946161 │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │                     │
	│ delete  │ -p insufficient-storage-946161                                                                                                           │ insufficient-storage-946161 │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:15 UTC │
	│ start   │ -p NoKubernetes-237154 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │                     │
	│ start   │ -p pause-088343 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-088343                │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p force-systemd-env-093715 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-093715    │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p offline-crio-073078 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-073078         │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p NoKubernetes-237154 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p NoKubernetes-237154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ delete  │ -p force-systemd-env-093715                                                                                                              │ force-systemd-env-093715    │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p missing-upgrade-521324 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-521324      │ jenkins │ v1.35.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:17 UTC │
	│ delete  │ -p NoKubernetes-237154                                                                                                                   │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p NoKubernetes-237154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ ssh     │ -p NoKubernetes-237154 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │                     │
	│ stop    │ -p NoKubernetes-237154                                                                                                                   │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:16 UTC │
	│ start   │ -p NoKubernetes-237154 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:16 UTC │ 26 Nov 25 20:17 UTC │
	│ ssh     │ -p NoKubernetes-237154 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │                     │
	│ delete  │ -p NoKubernetes-237154                                                                                                                   │ NoKubernetes-237154         │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-225144   │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │                     │
	│ delete  │ -p offline-crio-073078                                                                                                                   │ offline-crio-073078         │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p pause-088343 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-088343                │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p stopped-upgrade-211103 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-211103      │ jenkins │ v1.35.0 │ 26 Nov 25 20:17 UTC │                     │
	│ start   │ -p missing-upgrade-521324 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-521324      │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │                     │
	│ pause   │ -p pause-088343 --alsologtostderr -v=5                                                                                                   │ pause-088343                │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:17:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:17:13.569544  205232 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:17:13.569670  205232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:17:13.569682  205232 out.go:374] Setting ErrFile to fd 2...
	I1126 20:17:13.569689  205232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:17:13.569994  205232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:17:13.570526  205232 out.go:368] Setting JSON to false
	I1126 20:17:13.571802  205232 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3584,"bootTime":1764184650,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:17:13.571857  205232 start.go:143] virtualization: kvm guest
	I1126 20:17:13.573276  205232 out.go:179] * [missing-upgrade-521324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:17:13.575259  205232 notify.go:221] Checking for updates...
	I1126 20:17:13.575268  205232 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:17:13.577133  205232 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:17:13.581011  205232 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:17:13.582859  205232 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:17:13.584848  205232 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:17:13.586223  205232 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:17:13.587893  205232 config.go:182] Loaded profile config "missing-upgrade-521324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1126 20:17:13.589736  205232 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1126 20:17:13.594925  205232 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:17:13.622662  205232 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:17:13.622774  205232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:17:13.696037  205232 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-26 20:17:13.684779089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:17:13.696192  205232 docker.go:319] overlay module found
	I1126 20:17:13.698261  205232 out.go:179] * Using the docker driver based on existing profile
	I1126 20:17:13.699398  205232 start.go:309] selected driver: docker
	I1126 20:17:13.699412  205232 start.go:927] validating driver "docker" against &{Name:missing-upgrade-521324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-521324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:17:13.699582  205232 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:17:13.700339  205232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:17:13.791247  205232 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-26 20:17:13.774204325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:17:13.791758  205232 cni.go:84] Creating CNI manager for ""
	I1126 20:17:13.791889  205232 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:17:13.791972  205232 start.go:353] cluster config:
	{Name:missing-upgrade-521324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-521324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:17:13.794945  205232 out.go:179] * Starting "missing-upgrade-521324" primary control-plane node in "missing-upgrade-521324" cluster
	I1126 20:17:13.795985  205232 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:17:13.797150  205232 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:17:13.798182  205232 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1126 20:17:13.798215  205232 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1126 20:17:13.798250  205232 cache.go:65] Caching tarball of preloaded images
	I1126 20:17:13.798282  205232 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1126 20:17:13.798381  205232 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:17:13.798396  205232 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1126 20:17:13.798543  205232 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/missing-upgrade-521324/config.json ...
	I1126 20:17:13.825935  205232 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1126 20:17:13.825959  205232 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1126 20:17:13.825977  205232 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:17:13.826024  205232 start.go:360] acquireMachinesLock for missing-upgrade-521324: {Name:mk63135e99d868a2faf91fd11fac0b75a0ab9998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:17:13.826098  205232 start.go:364] duration metric: took 42.147µs to acquireMachinesLock for "missing-upgrade-521324"
	I1126 20:17:13.826126  205232 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:17:13.826135  205232 fix.go:54] fixHost starting: 
	I1126 20:17:13.826419  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:13.847158  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:13.847221  205232 fix.go:112] recreateIfNeeded on missing-upgrade-521324: state= err=unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.847267  205232 fix.go:117] machineExists: false. err=machine does not exist
	I1126 20:17:13.848522  205232 out.go:179] * docker "missing-upgrade-521324" container is missing, will recreate.
	I1126 20:17:10.315614  201907 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-225144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.999065472s)
	I1126 20:17:10.315648  201907 kic.go:203] duration metric: took 4.999269089s to extract preloaded images to volume ...
	W1126 20:17:10.315771  201907 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:17:10.315805  201907 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:17:10.315847  201907 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:17:10.380969  201907 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-225144 --name kubernetes-upgrade-225144 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-225144 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-225144 --network kubernetes-upgrade-225144 --ip 192.168.103.2 --volume kubernetes-upgrade-225144:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:17:10.684720  201907 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-225144 --format={{.State.Running}}
	I1126 20:17:10.703885  201907 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-225144 --format={{.State.Status}}
	I1126 20:17:10.729309  201907 cli_runner.go:164] Run: docker exec kubernetes-upgrade-225144 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:17:10.779571  201907 oci.go:144] the created container "kubernetes-upgrade-225144" has a running status.
	I1126 20:17:10.779601  201907 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa...
	I1126 20:17:10.944022  201907 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:17:10.976289  201907 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-225144 --format={{.State.Status}}
	I1126 20:17:11.005554  201907 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:17:11.005573  201907 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-225144 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:17:11.068793  201907 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-225144 --format={{.State.Status}}
	I1126 20:17:11.091429  201907 machine.go:94] provisionDockerMachine start ...
	I1126 20:17:11.091563  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:11.113549  201907 main.go:143] libmachine: Using SSH client type: native
	I1126 20:17:11.113910  201907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1126 20:17:11.113933  201907 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:17:11.256921  201907 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225144
	
	I1126 20:17:11.256946  201907 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-225144"
	I1126 20:17:11.257012  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:11.276997  201907 main.go:143] libmachine: Using SSH client type: native
	I1126 20:17:11.277197  201907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1126 20:17:11.277210  201907 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-225144 && echo "kubernetes-upgrade-225144" | sudo tee /etc/hostname
	I1126 20:17:11.427390  201907 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225144
	
	I1126 20:17:11.427549  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:11.446892  201907 main.go:143] libmachine: Using SSH client type: native
	I1126 20:17:11.447178  201907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1126 20:17:11.447206  201907 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-225144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-225144/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-225144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:17:11.593738  201907 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:17:11.593765  201907 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:17:11.593791  201907 ubuntu.go:190] setting up certificates
	I1126 20:17:11.593801  201907 provision.go:84] configureAuth start
	I1126 20:17:11.593854  201907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-225144
	I1126 20:17:11.612303  201907 provision.go:143] copyHostCerts
	I1126 20:17:11.612360  201907 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:17:11.612371  201907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:17:11.612429  201907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:17:11.612539  201907 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:17:11.612549  201907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:17:11.612578  201907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:17:11.612637  201907 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:17:11.612647  201907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:17:11.612682  201907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:17:11.612745  201907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-225144 san=[127.0.0.1 192.168.103.2 kubernetes-upgrade-225144 localhost minikube]
	I1126 20:17:11.816900  201907 provision.go:177] copyRemoteCerts
	I1126 20:17:11.816963  201907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:17:11.817001  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:11.837271  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:11.948703  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:17:11.971551  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1126 20:17:11.992582  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:17:12.013929  201907 provision.go:87] duration metric: took 420.113772ms to configureAuth
	I1126 20:17:12.013955  201907 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:17:12.014147  201907 config.go:182] Loaded profile config "kubernetes-upgrade-225144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:17:12.014267  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.034862  201907 main.go:143] libmachine: Using SSH client type: native
	I1126 20:17:12.035164  201907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1126 20:17:12.035192  201907 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:17:12.346703  201907 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:17:12.346743  201907 machine.go:97] duration metric: took 1.255291253s to provisionDockerMachine
	I1126 20:17:12.346756  201907 client.go:176] duration metric: took 7.535982772s to LocalClient.Create
	I1126 20:17:12.346775  201907 start.go:167] duration metric: took 7.536044281s to libmachine.API.Create "kubernetes-upgrade-225144"
	I1126 20:17:12.346786  201907 start.go:293] postStartSetup for "kubernetes-upgrade-225144" (driver="docker")
	I1126 20:17:12.346798  201907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:17:12.346889  201907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:17:12.346941  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.369307  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:12.477075  201907 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:17:12.480763  201907 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:17:12.480799  201907 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:17:12.480811  201907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:17:12.480856  201907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:17:12.480929  201907 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:17:12.481013  201907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:17:12.489736  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:17:12.510830  201907 start.go:296] duration metric: took 164.029867ms for postStartSetup
	I1126 20:17:12.511191  201907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-225144
	I1126 20:17:12.530789  201907 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/config.json ...
	I1126 20:17:12.531059  201907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:17:12.531109  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.550536  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:12.648946  201907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:17:12.653845  201907 start.go:128] duration metric: took 7.844914797s to createHost
	I1126 20:17:12.653872  201907 start.go:83] releasing machines lock for "kubernetes-upgrade-225144", held for 7.845050584s
	I1126 20:17:12.653944  201907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-225144
	I1126 20:17:12.672283  201907 ssh_runner.go:195] Run: cat /version.json
	I1126 20:17:12.672343  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.672349  201907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:17:12.672416  201907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-225144
	I1126 20:17:12.694371  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:12.694691  201907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kubernetes-upgrade-225144/id_rsa Username:docker}
	I1126 20:17:12.845598  201907 ssh_runner.go:195] Run: systemctl --version
	I1126 20:17:12.851624  201907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:17:12.886589  201907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:17:12.891533  201907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:17:12.891612  201907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:17:12.960655  201907 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:17:12.960679  201907 start.go:496] detecting cgroup driver to use...
	I1126 20:17:12.960712  201907 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:17:12.960759  201907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:17:12.978339  201907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:17:12.991736  201907 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:17:12.991807  201907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:17:13.009419  201907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:17:13.027088  201907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:17:13.140907  201907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:17:13.250479  201907 docker.go:234] disabling docker service ...
	I1126 20:17:13.250549  201907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:17:13.271146  201907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:17:13.284068  201907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:17:13.377256  201907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:17:13.485962  201907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:17:13.542746  201907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:17:13.567386  201907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1126 20:17:13.567441  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.582517  201907 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:17:13.582576  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.593612  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.605021  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.616059  201907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:17:13.625938  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.635895  201907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.657569  201907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:17:13.669769  201907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:17:13.680409  201907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:17:13.689086  201907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:17:13.809381  201907 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:17:13.990747  201907 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:17:13.990811  201907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:17:13.994662  201907 start.go:564] Will wait 60s for crictl version
	I1126 20:17:13.994719  201907 ssh_runner.go:195] Run: which crictl
	I1126 20:17:13.998246  201907 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:17:14.024637  201907 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:17:14.024723  201907 ssh_runner.go:195] Run: crio --version
	I1126 20:17:14.054496  201907 ssh_runner.go:195] Run: crio --version
	I1126 20:17:14.088566  201907 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1126 20:17:13.474943  202639 cli_runner.go:164] Run: docker network inspect pause-088343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:17:13.494067  202639 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:17:13.498430  202639 kubeadm.go:884] updating cluster {Name:pause-088343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-088343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:17:13.498632  202639 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:17:13.498693  202639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:17:13.567740  202639 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:17:13.568159  202639 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:17:13.568244  202639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:17:13.600551  202639 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:17:13.600580  202639 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:17:13.600606  202639 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1126 20:17:13.600745  202639 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-088343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-088343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:17:13.600826  202639 ssh_runner.go:195] Run: crio config
	I1126 20:17:13.665422  202639 cni.go:84] Creating CNI manager for ""
	I1126 20:17:13.665451  202639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:17:13.665497  202639 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:17:13.665527  202639 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-088343 NodeName:pause-088343 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:17:13.665686  202639 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-088343"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:17:13.665767  202639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:17:13.679656  202639 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:17:13.679717  202639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:17:13.689081  202639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1126 20:17:13.703017  202639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:17:13.717353  202639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1126 20:17:13.738259  202639 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:17:13.744851  202639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:17:13.888221  202639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:17:13.903419  202639 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343 for IP: 192.168.76.2
	I1126 20:17:13.903443  202639 certs.go:195] generating shared ca certs ...
	I1126 20:17:13.903559  202639 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:13.903748  202639 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:17:13.903811  202639 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:17:13.903826  202639 certs.go:257] generating profile certs ...
	I1126 20:17:13.903950  202639 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.key
	I1126 20:17:13.904033  202639 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/apiserver.key.faa3837e
	I1126 20:17:13.904089  202639 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/proxy-client.key
	I1126 20:17:13.904269  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:17:13.904317  202639 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:17:13.904334  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:17:13.904370  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:17:13.904409  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:17:13.904449  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:17:13.904537  202639 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:17:13.905337  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:17:13.926411  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:17:13.949384  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:17:13.968745  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:17:13.989759  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 20:17:14.007915  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:17:14.027711  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:17:14.046791  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:17:14.065884  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:17:14.083958  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:17:14.104746  202639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:17:14.124264  202639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:17:14.137893  202639 ssh_runner.go:195] Run: openssl version
	I1126 20:17:14.145707  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:17:14.156039  202639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.160525  202639 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.160577  202639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.204558  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:17:14.214470  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:17:14.223376  202639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.227253  202639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.227303  202639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.276386  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:17:14.284156  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:17:14.293757  202639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:17:14.297569  202639 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:17:14.297623  202639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:17:14.089883  201907 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-225144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:17:14.108037  201907 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1126 20:17:14.111984  201907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:17:14.122171  201907 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-225144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-225144 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:17:14.122285  201907 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:17:14.122335  201907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:17:14.157344  201907 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:17:14.157367  201907 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:17:14.157419  201907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:17:14.184769  201907 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:17:14.184797  201907 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:17:14.184806  201907 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1126 20:17:14.184924  201907 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-225144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-225144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:17:14.185011  201907 ssh_runner.go:195] Run: crio config
	I1126 20:17:14.245009  201907 cni.go:84] Creating CNI manager for ""
	I1126 20:17:14.245045  201907 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:17:14.245065  201907 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:17:14.245091  201907 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-225144 NodeName:kubernetes-upgrade-225144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:17:14.245263  201907 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-225144"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:17:14.245341  201907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1126 20:17:14.253373  201907 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:17:14.253434  201907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:17:14.261180  201907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1126 20:17:14.273306  201907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:17:14.289524  201907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I1126 20:17:14.303495  201907 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:17:14.307162  201907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:17:14.316738  201907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:17:14.409364  201907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:17:14.435899  201907 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144 for IP: 192.168.103.2
	I1126 20:17:14.435917  201907 certs.go:195] generating shared ca certs ...
	I1126 20:17:14.435936  201907 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.436092  201907 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:17:14.436208  201907 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:17:14.436226  201907 certs.go:257] generating profile certs ...
	I1126 20:17:14.436292  201907 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.key
	I1126 20:17:14.436308  201907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.crt with IP's: []
	I1126 20:17:14.526512  201907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.crt ...
	I1126 20:17:14.526537  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.crt: {Name:mk8ca0ed83be291ec3801953a1afb0810c00f08b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.526695  201907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.key ...
	I1126 20:17:14.526711  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.key: {Name:mkaed1bf6ed48885d6fb38f3f0eee4801835e41e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.526821  201907 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key.4f7b8804
	I1126 20:17:14.526838  201907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt.4f7b8804 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1126 20:17:14.573320  201907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt.4f7b8804 ...
	I1126 20:17:14.573343  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt.4f7b8804: {Name:mk31a74ff9d2aa51c58ad70994ca7a15a1607fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.573505  201907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key.4f7b8804 ...
	I1126 20:17:14.573523  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key.4f7b8804: {Name:mk2df4e22a98e02fc26fa8ad159d779f5e628c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.573629  201907 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt.4f7b8804 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt
	I1126 20:17:14.573708  201907 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key.4f7b8804 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key
	I1126 20:17:14.573767  201907 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.key
	I1126 20:17:14.573782  201907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.crt with IP's: []
	I1126 20:17:14.334507  202639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:17:14.342268  202639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:17:14.346156  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:17:14.385351  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:17:14.423257  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:17:14.476107  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:17:14.513302  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:17:14.549057  202639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:17:14.583839  202639 kubeadm.go:401] StartCluster: {Name:pause-088343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-088343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:17:14.583973  202639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:17:14.584039  202639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:17:14.612779  202639 cri.go:89] found id: "23efe5ed2d5129e2520b991840514eab0b95ecff061cdeed4dce4a50852925f9"
	I1126 20:17:14.612807  202639 cri.go:89] found id: "ded1aa6ae8ad4b032f860f18ec98c2321c1aecc1632c746bf85cfd4786777f49"
	I1126 20:17:14.612814  202639 cri.go:89] found id: "08f40eeb695e63781aee8a6dd64716e465781b63c96d5ecc21e5a1895bc0c4a0"
	I1126 20:17:14.612819  202639 cri.go:89] found id: "38113599f674298b9b3fea5b2a7c1e4c46276f1c818aa294ebd11f463d2b5429"
	I1126 20:17:14.612823  202639 cri.go:89] found id: "8fff4af24a8946fc7daabaa1caacec5d7deeaf3f92227b0f6671f78b34598cb6"
	I1126 20:17:14.612828  202639 cri.go:89] found id: "6ec60134d42facb0d523b0d825b2d62cae44913de8328d8a2ed7b3bcba96b481"
	I1126 20:17:14.612832  202639 cri.go:89] found id: "77ab3602e99c51dd1062dba3be62901ab816c07d84dd7894b76a2f95e108f6c0"
	I1126 20:17:14.612837  202639 cri.go:89] found id: ""
	I1126 20:17:14.612880  202639 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:17:14.624296  202639 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:17:14Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:17:14.624359  202639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:17:14.632215  202639 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:17:14.632233  202639 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:17:14.632275  202639 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:17:14.640289  202639 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:17:14.641192  202639 kubeconfig.go:125] found "pause-088343" server: "https://192.168.76.2:8443"
	I1126 20:17:14.642380  202639 kapi.go:59] client config for pause-088343: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.key", CAFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:17:14.642888  202639 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:17:14.642912  202639 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:17:14.642919  202639 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:17:14.642925  202639 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:17:14.642931  202639 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:17:14.643242  202639 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:17:14.650642  202639 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1126 20:17:14.650673  202639 kubeadm.go:602] duration metric: took 18.434547ms to restartPrimaryControlPlane
	I1126 20:17:14.650683  202639 kubeadm.go:403] duration metric: took 66.852446ms to StartCluster
	I1126 20:17:14.650695  202639 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.650745  202639 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:17:14.651663  202639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.651908  202639 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:17:14.651971  202639 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:17:14.652260  202639 config.go:182] Loaded profile config "pause-088343": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:17:14.653724  202639 out.go:179] * Enabled addons: 
	I1126 20:17:14.653730  202639 out.go:179] * Verifying Kubernetes components...
	I1126 20:17:14.649579  201907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.crt ...
	I1126 20:17:14.649602  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.crt: {Name:mk9de2ab2e4b93572b6a95d7385771a05d0e808a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.649746  201907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.key ...
	I1126 20:17:14.649762  201907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.key: {Name:mk54b8d481d43878c8f56b75e3a8a0b524eb6308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:17:14.649974  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:17:14.650021  201907 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:17:14.650037  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:17:14.650075  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:17:14.650110  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:17:14.650144  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:17:14.650206  201907 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:17:14.650894  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:17:14.668940  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:17:14.691646  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:17:14.712596  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:17:14.729121  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1126 20:17:14.745908  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:17:14.762986  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:17:14.780644  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:17:14.799794  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:17:14.820180  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:17:14.839417  201907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:17:14.869917  201907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:17:14.883764  201907 ssh_runner.go:195] Run: openssl version
	I1126 20:17:14.890407  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:17:14.898761  201907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.902206  201907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.902261  201907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:17:14.936659  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:17:14.945848  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:17:14.954193  201907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.958373  201907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.958441  201907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:17:14.997642  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:17:15.006634  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:17:15.014797  201907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:17:15.018274  201907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:17:15.018330  201907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:17:15.052504  201907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:17:15.062263  201907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:17:15.065773  201907 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:17:15.065828  201907 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-225144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-225144 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:17:15.065913  201907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:17:15.065965  201907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:17:15.092209  201907 cri.go:89] found id: ""
	I1126 20:17:15.092282  201907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:17:15.100292  201907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:17:15.108041  201907 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:17:15.108102  201907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:17:15.115297  201907 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:17:15.115319  201907 kubeadm.go:158] found existing configuration files:
	
	I1126 20:17:15.115363  201907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:17:15.123104  201907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:17:15.123156  201907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:17:15.130094  201907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:17:15.137300  201907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:17:15.137349  201907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:17:15.144353  201907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:17:15.151898  201907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:17:15.151946  201907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:17:15.158840  201907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:17:15.166162  201907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:17:15.166224  201907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:17:15.173684  201907 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:17:15.222885  201907 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1126 20:17:15.222993  201907 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:17:15.262120  201907 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:17:15.262216  201907 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:17:15.262296  201907 kubeadm.go:319] OS: Linux
	I1126 20:17:15.262373  201907 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:17:15.262443  201907 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:17:15.262541  201907 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:17:15.262619  201907 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:17:15.262693  201907 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:17:15.262769  201907 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:17:15.262842  201907 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:17:15.262911  201907 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:17:15.329721  201907 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:17:15.329875  201907 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:17:15.330038  201907 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1126 20:17:15.483898  201907 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:17:14.657819  202639 addons.go:530] duration metric: took 5.852205ms for enable addons: enabled=[]
	I1126 20:17:14.657849  202639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:17:14.768156  202639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:17:14.782622  202639 node_ready.go:35] waiting up to 6m0s for node "pause-088343" to be "Ready" ...
	I1126 20:17:14.790398  202639 node_ready.go:49] node "pause-088343" is "Ready"
	I1126 20:17:14.790429  202639 node_ready.go:38] duration metric: took 7.779156ms for node "pause-088343" to be "Ready" ...
	I1126 20:17:14.790445  202639 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:17:14.790507  202639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:17:14.802026  202639 api_server.go:72] duration metric: took 150.084217ms to wait for apiserver process to appear ...
	I1126 20:17:14.802051  202639 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:17:14.802068  202639 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:17:14.806638  202639 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:17:14.807620  202639 api_server.go:141] control plane version: v1.34.1
	I1126 20:17:14.807644  202639 api_server.go:131] duration metric: took 5.586341ms to wait for apiserver health ...
	I1126 20:17:14.807652  202639 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:17:14.810680  202639 system_pods.go:59] 7 kube-system pods found
	I1126 20:17:14.810719  202639 system_pods.go:61] "coredns-66bc5c9577-npkd9" [4e96858c-42d7-4bb9-a5a9-252f2585bf9b] Running
	I1126 20:17:14.810734  202639 system_pods.go:61] "etcd-pause-088343" [5f9e8c9b-fb82-4c5c-a067-e4f9d8d58f0d] Running
	I1126 20:17:14.810740  202639 system_pods.go:61] "kindnet-s6tf4" [48867150-fa26-4bb5-91d9-91a5d5d2f6ee] Running
	I1126 20:17:14.810749  202639 system_pods.go:61] "kube-apiserver-pause-088343" [96890fe3-682e-44c9-87f6-5f5d9a409126] Running
	I1126 20:17:14.810755  202639 system_pods.go:61] "kube-controller-manager-pause-088343" [062bce23-d1a8-47d1-a42e-86331d362308] Running
	I1126 20:17:14.810761  202639 system_pods.go:61] "kube-proxy-j4rc4" [7036e338-edf1-43c2-b5b3-213e285bdd62] Running
	I1126 20:17:14.810766  202639 system_pods.go:61] "kube-scheduler-pause-088343" [ac9f62f1-364b-4832-bf3d-9c76acbb00bf] Running
	I1126 20:17:14.810777  202639 system_pods.go:74] duration metric: took 3.11854ms to wait for pod list to return data ...
	I1126 20:17:14.810785  202639 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:17:14.812745  202639 default_sa.go:45] found service account: "default"
	I1126 20:17:14.812765  202639 default_sa.go:55] duration metric: took 1.97142ms for default service account to be created ...
	I1126 20:17:14.812775  202639 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:17:14.815432  202639 system_pods.go:86] 7 kube-system pods found
	I1126 20:17:14.815470  202639 system_pods.go:89] "coredns-66bc5c9577-npkd9" [4e96858c-42d7-4bb9-a5a9-252f2585bf9b] Running
	I1126 20:17:14.815480  202639 system_pods.go:89] "etcd-pause-088343" [5f9e8c9b-fb82-4c5c-a067-e4f9d8d58f0d] Running
	I1126 20:17:14.815486  202639 system_pods.go:89] "kindnet-s6tf4" [48867150-fa26-4bb5-91d9-91a5d5d2f6ee] Running
	I1126 20:17:14.815494  202639 system_pods.go:89] "kube-apiserver-pause-088343" [96890fe3-682e-44c9-87f6-5f5d9a409126] Running
	I1126 20:17:14.815500  202639 system_pods.go:89] "kube-controller-manager-pause-088343" [062bce23-d1a8-47d1-a42e-86331d362308] Running
	I1126 20:17:14.815506  202639 system_pods.go:89] "kube-proxy-j4rc4" [7036e338-edf1-43c2-b5b3-213e285bdd62] Running
	I1126 20:17:14.815511  202639 system_pods.go:89] "kube-scheduler-pause-088343" [ac9f62f1-364b-4832-bf3d-9c76acbb00bf] Running
	I1126 20:17:14.815518  202639 system_pods.go:126] duration metric: took 2.73742ms to wait for k8s-apps to be running ...
	I1126 20:17:14.815532  202639 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:17:14.815576  202639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:17:14.830082  202639 system_svc.go:56] duration metric: took 14.542615ms WaitForService to wait for kubelet
	I1126 20:17:14.830109  202639 kubeadm.go:587] duration metric: took 178.16907ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:17:14.830131  202639 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:17:14.832244  202639 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:17:14.832265  202639 node_conditions.go:123] node cpu capacity is 8
	I1126 20:17:14.832280  202639 node_conditions.go:105] duration metric: took 2.143451ms to run NodePressure ...
	I1126 20:17:14.832294  202639 start.go:242] waiting for startup goroutines ...
	I1126 20:17:14.832305  202639 start.go:247] waiting for cluster config update ...
	I1126 20:17:14.832318  202639 start.go:256] writing updated cluster config ...
	I1126 20:17:14.832633  202639 ssh_runner.go:195] Run: rm -f paused
	I1126 20:17:14.836186  202639 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:17:14.836940  202639 kapi.go:59] client config for pause-088343: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/profiles/pause-088343/client.key", CAFile:"/home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:17:14.839728  202639 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-npkd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.847177  202639 pod_ready.go:94] pod "coredns-66bc5c9577-npkd9" is "Ready"
	I1126 20:17:14.847204  202639 pod_ready.go:86] duration metric: took 7.451897ms for pod "coredns-66bc5c9577-npkd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.852958  202639 pod_ready.go:83] waiting for pod "etcd-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.860276  202639 pod_ready.go:94] pod "etcd-pause-088343" is "Ready"
	I1126 20:17:14.860302  202639 pod_ready.go:86] duration metric: took 7.323261ms for pod "etcd-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.868231  202639 pod_ready.go:83] waiting for pod "kube-apiserver-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.872334  202639 pod_ready.go:94] pod "kube-apiserver-pause-088343" is "Ready"
	I1126 20:17:14.872358  202639 pod_ready.go:86] duration metric: took 4.103783ms for pod "kube-apiserver-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:14.874270  202639 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:15.241380  202639 pod_ready.go:94] pod "kube-controller-manager-pause-088343" is "Ready"
	I1126 20:17:15.241410  202639 pod_ready.go:86] duration metric: took 367.123406ms for pod "kube-controller-manager-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:15.440642  202639 pod_ready.go:83] waiting for pod "kube-proxy-j4rc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:15.840897  202639 pod_ready.go:94] pod "kube-proxy-j4rc4" is "Ready"
	I1126 20:17:15.840922  202639 pod_ready.go:86] duration metric: took 400.257489ms for pod "kube-proxy-j4rc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:16.040420  202639 pod_ready.go:83] waiting for pod "kube-scheduler-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:16.439428  202639 pod_ready.go:94] pod "kube-scheduler-pause-088343" is "Ready"
	I1126 20:17:16.439452  202639 pod_ready.go:86] duration metric: took 399.006281ms for pod "kube-scheduler-pause-088343" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:17:16.439474  202639 pod_ready.go:40] duration metric: took 1.603245156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:17:16.482690  202639 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:17:16.513906  202639 out.go:179] * Done! kubectl is now configured to use "pause-088343" cluster and "default" namespace by default
	I1126 20:17:12.493940  204288 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:17:12.494232  204288 start.go:159] libmachine.API.Create for "stopped-upgrade-211103" (driver="docker")
	I1126 20:17:12.494265  204288 client.go:168] LocalClient.Create starting
	I1126 20:17:12.494354  204288 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 20:17:12.494391  204288 main.go:141] libmachine: Decoding PEM data...
	I1126 20:17:12.494408  204288 main.go:141] libmachine: Parsing certificate...
	I1126 20:17:12.494495  204288 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 20:17:12.494522  204288 main.go:141] libmachine: Decoding PEM data...
	I1126 20:17:12.494533  204288 main.go:141] libmachine: Parsing certificate...
	I1126 20:17:12.495000  204288 cli_runner.go:164] Run: docker network inspect stopped-upgrade-211103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:17:12.514669  204288 cli_runner.go:211] docker network inspect stopped-upgrade-211103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:17:12.514750  204288 network_create.go:284] running [docker network inspect stopped-upgrade-211103] to gather additional debugging logs...
	I1126 20:17:12.514770  204288 cli_runner.go:164] Run: docker network inspect stopped-upgrade-211103
	W1126 20:17:12.533239  204288 cli_runner.go:211] docker network inspect stopped-upgrade-211103 returned with exit code 1
	I1126 20:17:12.533268  204288 network_create.go:287] error running [docker network inspect stopped-upgrade-211103]: docker network inspect stopped-upgrade-211103: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network stopped-upgrade-211103 not found
	I1126 20:17:12.533295  204288 network_create.go:289] output of [docker network inspect stopped-upgrade-211103]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network stopped-upgrade-211103 not found
	
	** /stderr **
	I1126 20:17:12.533470  204288 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:17:12.552550  204288 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
	I1126 20:17:12.553176  204288 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bbc6aaddce08 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:93:ea:3f:0d:7e} reservation:<nil>}
	I1126 20:17:12.553819  204288 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ecd673f5b2f2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:d0:57:1a:06:91} reservation:<nil>}
	I1126 20:17:12.554411  204288 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-53ea54025484 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1e:01:df:bb:7d:a4} reservation:<nil>}
	I1126 20:17:12.555252  204288 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e15870}
	I1126 20:17:12.555276  204288 network_create.go:124] attempt to create docker network stopped-upgrade-211103 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1126 20:17:12.555332  204288 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=stopped-upgrade-211103 stopped-upgrade-211103
	I1126 20:17:12.605801  204288 network_create.go:108] docker network stopped-upgrade-211103 192.168.85.0/24 created
	I1126 20:17:12.605847  204288 kic.go:121] calculated static IP "192.168.85.2" for the "stopped-upgrade-211103" container
	I1126 20:17:12.605924  204288 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:17:12.630700  204288 cli_runner.go:164] Run: docker volume create stopped-upgrade-211103 --label name.minikube.sigs.k8s.io=stopped-upgrade-211103 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:17:12.647729  204288 oci.go:103] Successfully created a docker volume stopped-upgrade-211103
	I1126 20:17:12.647840  204288 cli_runner.go:164] Run: docker run --rm --name stopped-upgrade-211103-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-211103 --entrypoint /usr/bin/test -v stopped-upgrade-211103:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1126 20:17:13.584010  204288 oci.go:107] Successfully prepared a docker volume stopped-upgrade-211103
	I1126 20:17:13.584074  204288 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1126 20:17:13.584102  204288 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:17:13.584180  204288 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-211103:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:17:15.489446  201907 out.go:252]   - Generating certificates and keys ...
	I1126 20:17:15.489556  201907 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:17:15.489655  201907 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:17:15.949935  201907 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:17:16.247637  201907 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:17:16.671156  201907 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:17:16.827309  201907 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:17:16.973201  201907 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:17:16.973383  201907 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-225144 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:17:17.120393  201907 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:17:17.121375  201907 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-225144 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:17:17.618562  201907 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:17:17.711260  201907 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:17:17.779416  201907 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:17:17.779533  201907 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:17:17.919024  201907 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:17:18.148942  201907 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:17:18.243823  201907 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:17:18.384814  201907 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:17:18.385539  201907 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:17:18.390873  201907 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 20:17:13.849678  205232 delete.go:124] DEMOLISHING missing-upgrade-521324 ...
	I1126 20:17:13.849758  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:13.869593  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	W1126 20:17:13.869651  205232 stop.go:83] unable to get state: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.869669  205232 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.870094  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:13.889643  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:13.889717  205232 delete.go:82] Unable to get host status for missing-upgrade-521324, assuming it has already been deleted: state: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.889780  205232 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-521324
	W1126 20:17:13.911626  205232 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-521324 returned with exit code 1
	I1126 20:17:13.911665  205232 kic.go:371] could not find the container missing-upgrade-521324 to remove it. will try anyways
	I1126 20:17:13.911710  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:13.934683  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	W1126 20:17:13.934736  205232 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:13.934787  205232 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-521324 /bin/bash -c "sudo init 0"
	W1126 20:17:13.954899  205232 cli_runner.go:211] docker exec --privileged -t missing-upgrade-521324 /bin/bash -c "sudo init 0" returned with exit code 1
	I1126 20:17:13.954933  205232 oci.go:659] error shutdown missing-upgrade-521324: docker exec --privileged -t missing-upgrade-521324 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:14.955644  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:14.974447  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:14.974519  205232 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:14.974532  205232 oci.go:673] temporary error: container missing-upgrade-521324 status is  but expect it to be exited
	I1126 20:17:14.974568  205232 retry.go:31] will retry after 255.124429ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:15.229853  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:15.248596  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:15.248646  205232 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:15.248673  205232 oci.go:673] temporary error: container missing-upgrade-521324 status is  but expect it to be exited
	I1126 20:17:15.248705  205232 retry.go:31] will retry after 565.430892ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:15.814472  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:15.834014  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:15.834094  205232 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:15.834109  205232 oci.go:673] temporary error: container missing-upgrade-521324 status is  but expect it to be exited
	I1126 20:17:15.834136  205232 retry.go:31] will retry after 944.306376ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:16.778621  205232 cli_runner.go:164] Run: docker container inspect missing-upgrade-521324 --format={{.State.Status}}
	W1126 20:17:16.796722  205232 cli_runner.go:211] docker container inspect missing-upgrade-521324 --format={{.State.Status}} returned with exit code 1
	I1126 20:17:16.796782  205232 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:16.796797  205232 oci.go:673] temporary error: container missing-upgrade-521324 status is  but expect it to be exited
	I1126 20:17:16.796831  205232 retry.go:31] will retry after 1.884179029s: couldn't verify container is exited. %v: unknown state "missing-upgrade-521324": docker container inspect missing-upgrade-521324 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-521324
	I1126 20:17:18.392323  201907 out.go:252]   - Booting up control plane ...
	I1126 20:17:18.392470  201907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:17:18.393386  201907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:17:18.394182  201907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:17:18.408899  201907 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:17:18.409701  201907 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:17:18.409766  201907 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:17:18.522706  201907 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	
	==> CRI-O <==
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.266234331Z" level=info msg="RDT not available in the host system"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.266252272Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267099923Z" level=info msg="Conmon does support the --sync option"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267120335Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267138402Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267847638Z" level=info msg="Conmon does support the --sync option"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.267863609Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.271731715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.271756742Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.272312046Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.272678862Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.272722101Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.358358438Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-npkd9 Namespace:kube-system ID:45137e979e355bffa133eac88dfd24d461e4290f2a744bf74ef0704290d171e8 UID:4e96858c-42d7-4bb9-a5a9-252f2585bf9b NetNS:/var/run/netns/3c411218-1f43-4fb6-a8ee-70e167d20a57 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b0f0}] Aliases:map[]}"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.35863388Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-npkd9 for CNI network kindnet (type=ptp)"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.3591271Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359167873Z" level=info msg="Starting seccomp notifier watcher"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359237706Z" level=info msg="Create NRI interface"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359922865Z" level=info msg="built-in NRI default validator is disabled"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359961017Z" level=info msg="runtime interface created"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359976871Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359985371Z" level=info msg="runtime interface starting up..."
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.359997553Z" level=info msg="starting plugins..."
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.360014095Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 26 20:17:13 pause-088343 crio[2181]: time="2025-11-26T20:17:13.36063018Z" level=info msg="No systemd watchdog enabled"
	Nov 26 20:17:13 pause-088343 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	23efe5ed2d512       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago       Running             coredns                   0                   45137e979e355       coredns-66bc5c9577-npkd9               kube-system
	ded1aa6ae8ad4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   57 seconds ago       Running             kube-proxy                0                   ea48e63076da1       kube-proxy-j4rc4                       kube-system
	08f40eeb695e6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   57 seconds ago       Running             kindnet-cni               0                   309d7385e9ca1       kindnet-s6tf4                          kube-system
	38113599f6742       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   59dc44ee95346       etcd-pause-088343                      kube-system
	8fff4af24a894       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   a4bd719fef483       kube-controller-manager-pause-088343   kube-system
	6ec60134d42fa       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   43e91e1d13dd2       kube-scheduler-pause-088343            kube-system
	77ab3602e99c5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   b293cedf7b086       kube-apiserver-pause-088343            kube-system
	
	
	==> coredns [23efe5ed2d5129e2520b991840514eab0b95ecff061cdeed4dce4a50852925f9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51119 - 16281 "HINFO IN 5625130158383906075.1338709804454349835. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.124283077s
	
	
	==> describe nodes <==
	Name:               pause-088343
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-088343
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=pause-088343
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_16_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:16:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-088343
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:17:05 +0000   Wed, 26 Nov 2025 20:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:17:05 +0000   Wed, 26 Nov 2025 20:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:17:05 +0000   Wed, 26 Nov 2025 20:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:17:05 +0000   Wed, 26 Nov 2025 20:17:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-088343
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                5f3e1164-b5cf-4da8-831d-d2b903341fe7
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-npkd9                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     58s
	  kube-system                 etcd-pause-088343                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         63s
	  kube-system                 kindnet-s6tf4                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-pause-088343             250m (3%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-pause-088343    200m (2%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-j4rc4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-088343             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 57s   kube-proxy       
	  Normal  Starting                 64s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s   kubelet          Node pause-088343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s   kubelet          Node pause-088343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s   kubelet          Node pause-088343 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           59s   node-controller  Node pause-088343 event: Registered Node pause-088343 in Controller
	  Normal  NodeReady                17s   kubelet          Node pause-088343 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [38113599f674298b9b3fea5b2a7c1e4c46276f1c818aa294ebd11f463d2b5429] <==
	{"level":"warn","ts":"2025-11-26T20:16:15.348910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.350016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.361307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.368153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.389745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.410717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.419614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.429061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.444036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.455842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.481759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.507799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.523502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.540878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.550776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.570702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:16:15.665582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38430","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T20:16:38.136799Z","caller":"traceutil/trace.go:172","msg":"trace[1534011608] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"127.927811ms","start":"2025-11-26T20:16:38.008850Z","end":"2025-11-26T20:16:38.136777Z","steps":["trace[1534011608] 'process raft request'  (duration: 64.33728ms)","trace[1534011608] 'compare'  (duration: 63.48434ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-26T20:16:47.460000Z","caller":"traceutil/trace.go:172","msg":"trace[554231108] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"198.993369ms","start":"2025-11-26T20:16:47.260989Z","end":"2025-11-26T20:16:47.459983Z","steps":["trace[554231108] 'process raft request'  (duration: 198.866517ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T20:17:07.957246Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.271189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:17:07.957325Z","caller":"traceutil/trace.go:172","msg":"trace[1012265623] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:414; }","duration":"100.36965ms","start":"2025-11-26T20:17:07.856938Z","end":"2025-11-26T20:17:07.957307Z","steps":["trace[1012265623] 'range keys from in-memory index tree'  (duration: 100.212202ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T20:17:10.107996Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.407726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:17:10.108308Z","caller":"traceutil/trace.go:172","msg":"trace[1754885902] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:416; }","duration":"148.722482ms","start":"2025-11-26T20:17:09.959570Z","end":"2025-11-26T20:17:10.108292Z","steps":["trace[1754885902] 'range keys from in-memory index tree'  (duration: 148.348531ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T20:17:10.107997Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.701975ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:17:10.108509Z","caller":"traceutil/trace.go:172","msg":"trace[1721702882] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:416; }","duration":"151.245582ms","start":"2025-11-26T20:17:09.957243Z","end":"2025-11-26T20:17:10.108489Z","steps":["trace[1721702882] 'range keys from in-memory index tree'  (duration: 150.685559ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:17:22 up 59 min,  0 user,  load average: 5.54, 2.42, 1.44
	Linux pause-088343 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [08f40eeb695e63781aee8a6dd64716e465781b63c96d5ecc21e5a1895bc0c4a0] <==
	I1126 20:16:24.931004       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:16:24.931305       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:16:24.932945       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:16:24.933150       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:16:24.933207       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:16:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:16:25.307565       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:16:25.307637       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:16:25.307671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:16:25.307823       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:16:55.232382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:16:55.232380       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:16:55.232382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:16:55.232539       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1126 20:16:56.508301       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:16:56.508330       1 metrics.go:72] Registering metrics
	I1126 20:16:56.508402       1 controller.go:711] "Syncing nftables rules"
	I1126 20:17:05.231091       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:17:05.231131       1 main.go:301] handling current node
	I1126 20:17:15.230730       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:17:15.230767       1 main.go:301] handling current node
	
	
	==> kube-apiserver [77ab3602e99c51dd1062dba3be62901ab816c07d84dd7894b76a2f95e108f6c0] <==
	I1126 20:16:16.431754       1 policy_source.go:240] refreshing policies
	E1126 20:16:16.433763       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1126 20:16:16.481444       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:16:16.483941       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:16:16.485429       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:16:16.490916       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:16:16.491398       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:16:16.613245       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:16:17.278214       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:16:17.282160       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:16:17.282178       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:16:17.798308       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:16:17.840923       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:16:17.986507       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:16:17.995103       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1126 20:16:17.996119       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:16:18.000575       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:16:18.342855       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:16:18.967872       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:16:18.978026       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:16:18.988413       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:16:23.743219       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:16:24.144672       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1126 20:16:24.530416       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:16:24.541962       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [8fff4af24a8946fc7daabaa1caacec5d7deeaf3f92227b0f6671f78b34598cb6] <==
	I1126 20:16:23.333341       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:16:23.338205       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:16:23.339444       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1126 20:16:23.339753       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 20:16:23.339814       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:16:23.339903       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 20:16:23.339949       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:16:23.340030       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:16:23.340071       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:16:23.340107       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:16:23.340173       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:16:23.340228       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:16:23.341298       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:16:23.342359       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:16:23.342798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:16:23.342820       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:16:23.342827       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:16:23.343939       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:16:23.344399       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:16:23.345642       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 20:16:23.345711       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:16:23.348994       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:16:23.353254       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:16:23.357560       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:17:08.285081       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ded1aa6ae8ad4b032f860f18ec98c2321c1aecc1632c746bf85cfd4786777f49] <==
	I1126 20:16:24.771569       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:16:24.876203       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:16:24.976649       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:16:24.977095       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:16:24.977215       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:16:25.021078       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:16:25.021144       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:16:25.028843       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:16:25.029605       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:16:25.029927       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:16:25.032415       1 config.go:200] "Starting service config controller"
	I1126 20:16:25.033776       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:16:25.033048       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:16:25.033929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:16:25.033277       1 config.go:309] "Starting node config controller"
	I1126 20:16:25.034019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:16:25.034140       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:16:25.033020       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:16:25.035206       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:16:25.134303       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:16:25.134312       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:16:25.135409       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6ec60134d42facb0d523b0d825b2d62cae44913de8328d8a2ed7b3bcba96b481] <==
	E1126 20:16:16.429553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:16:16.429567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:16:16.429604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:16:16.429644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:16:16.429660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:16:16.429735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:16:16.429814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:16:16.430449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:16:16.430502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:16:16.430567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:16:16.430597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:16:16.430595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:16:16.430744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:16:17.245485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:16:17.345644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:16:17.350675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:16:17.383897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:16:17.430878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:16:17.433852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:16:17.459931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:16:17.479918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1126 20:16:17.490939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:16:17.518977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:16:17.551351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1126 20:16:19.327046       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:16:19 pause-088343 kubelet[1291]: I1126 20:16:19.935057    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-088343" podStartSLOduration=0.935036237 podStartE2EDuration="935.036237ms" podCreationTimestamp="2025-11-26 20:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:19.916656529 +0000 UTC m=+1.146136834" watchObservedRunningTime="2025-11-26 20:16:19.935036237 +0000 UTC m=+1.164516535"
	Nov 26 20:16:19 pause-088343 kubelet[1291]: I1126 20:16:19.956364    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-088343" podStartSLOduration=0.956322559 podStartE2EDuration="956.322559ms" podCreationTimestamp="2025-11-26 20:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:19.935370331 +0000 UTC m=+1.164850633" watchObservedRunningTime="2025-11-26 20:16:19.956322559 +0000 UTC m=+1.185802857"
	Nov 26 20:16:19 pause-088343 kubelet[1291]: I1126 20:16:19.964446    1291 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 26 20:16:19 pause-088343 kubelet[1291]: I1126 20:16:19.980840    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-088343" podStartSLOduration=0.980820585 podStartE2EDuration="980.820585ms" podCreationTimestamp="2025-11-26 20:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:19.959204851 +0000 UTC m=+1.188685155" watchObservedRunningTime="2025-11-26 20:16:19.980820585 +0000 UTC m=+1.210300885"
	Nov 26 20:16:23 pause-088343 kubelet[1291]: I1126 20:16:23.357913    1291 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 26 20:16:23 pause-088343 kubelet[1291]: I1126 20:16:23.358699    1291 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198182    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48867150-fa26-4bb5-91d9-91a5d5d2f6ee-lib-modules\") pod \"kindnet-s6tf4\" (UID: \"48867150-fa26-4bb5-91d9-91a5d5d2f6ee\") " pod="kube-system/kindnet-s6tf4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198226    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7036e338-edf1-43c2-b5b3-213e285bdd62-xtables-lock\") pod \"kube-proxy-j4rc4\" (UID: \"7036e338-edf1-43c2-b5b3-213e285bdd62\") " pod="kube-system/kube-proxy-j4rc4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198251    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gncrd\" (UniqueName: \"kubernetes.io/projected/48867150-fa26-4bb5-91d9-91a5d5d2f6ee-kube-api-access-gncrd\") pod \"kindnet-s6tf4\" (UID: \"48867150-fa26-4bb5-91d9-91a5d5d2f6ee\") " pod="kube-system/kindnet-s6tf4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198277    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7036e338-edf1-43c2-b5b3-213e285bdd62-kube-proxy\") pod \"kube-proxy-j4rc4\" (UID: \"7036e338-edf1-43c2-b5b3-213e285bdd62\") " pod="kube-system/kube-proxy-j4rc4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198296    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7036e338-edf1-43c2-b5b3-213e285bdd62-lib-modules\") pod \"kube-proxy-j4rc4\" (UID: \"7036e338-edf1-43c2-b5b3-213e285bdd62\") " pod="kube-system/kube-proxy-j4rc4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198315    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fml7q\" (UniqueName: \"kubernetes.io/projected/7036e338-edf1-43c2-b5b3-213e285bdd62-kube-api-access-fml7q\") pod \"kube-proxy-j4rc4\" (UID: \"7036e338-edf1-43c2-b5b3-213e285bdd62\") " pod="kube-system/kube-proxy-j4rc4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198348    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/48867150-fa26-4bb5-91d9-91a5d5d2f6ee-cni-cfg\") pod \"kindnet-s6tf4\" (UID: \"48867150-fa26-4bb5-91d9-91a5d5d2f6ee\") " pod="kube-system/kindnet-s6tf4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.198404    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48867150-fa26-4bb5-91d9-91a5d5d2f6ee-xtables-lock\") pod \"kindnet-s6tf4\" (UID: \"48867150-fa26-4bb5-91d9-91a5d5d2f6ee\") " pod="kube-system/kindnet-s6tf4"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.964112    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j4rc4" podStartSLOduration=0.96408687 podStartE2EDuration="964.08687ms" podCreationTimestamp="2025-11-26 20:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:24.962221665 +0000 UTC m=+6.191701966" watchObservedRunningTime="2025-11-26 20:16:24.96408687 +0000 UTC m=+6.193567172"
	Nov 26 20:16:24 pause-088343 kubelet[1291]: I1126 20:16:24.979437    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s6tf4" podStartSLOduration=0.979417555 podStartE2EDuration="979.417555ms" podCreationTimestamp="2025-11-26 20:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:16:24.978922537 +0000 UTC m=+6.208402839" watchObservedRunningTime="2025-11-26 20:16:24.979417555 +0000 UTC m=+6.208897855"
	Nov 26 20:17:05 pause-088343 kubelet[1291]: I1126 20:17:05.758429    1291 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 26 20:17:05 pause-088343 kubelet[1291]: I1126 20:17:05.809053    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e96858c-42d7-4bb9-a5a9-252f2585bf9b-config-volume\") pod \"coredns-66bc5c9577-npkd9\" (UID: \"4e96858c-42d7-4bb9-a5a9-252f2585bf9b\") " pod="kube-system/coredns-66bc5c9577-npkd9"
	Nov 26 20:17:05 pause-088343 kubelet[1291]: I1126 20:17:05.809097    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9cf8\" (UniqueName: \"kubernetes.io/projected/4e96858c-42d7-4bb9-a5a9-252f2585bf9b-kube-api-access-s9cf8\") pod \"coredns-66bc5c9577-npkd9\" (UID: \"4e96858c-42d7-4bb9-a5a9-252f2585bf9b\") " pod="kube-system/coredns-66bc5c9577-npkd9"
	Nov 26 20:17:07 pause-088343 kubelet[1291]: I1126 20:17:07.060978    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-npkd9" podStartSLOduration=43.060957689 podStartE2EDuration="43.060957689s" podCreationTimestamp="2025-11-26 20:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:17:07.048604501 +0000 UTC m=+48.278084804" watchObservedRunningTime="2025-11-26 20:17:07.060957689 +0000 UTC m=+48.290437990"
	Nov 26 20:17:17 pause-088343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:17:17 pause-088343 kubelet[1291]: I1126 20:17:17.162565    1291 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 26 20:17:17 pause-088343 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:17:17 pause-088343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:17:17 pause-088343 systemd[1]: kubelet.service: Consumed 2.238s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-088343 -n pause-088343
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-088343 -n pause-088343: exit status 2 (363.844406ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-088343 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-157431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-157431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.578717ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:20:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-157431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-157431 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-157431 describe deploy/metrics-server -n kube-system: exit status 1 (63.074639ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-157431 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-157431
helpers_test.go:243: (dbg) docker inspect old-k8s-version-157431:

-- stdout --
	[
	    {
	        "Id": "77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf",
	        "Created": "2025-11-26T20:19:16.110022495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 237692,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:19:16.137890205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/hostname",
	        "HostsPath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/hosts",
	        "LogPath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf-json.log",
	        "Name": "/old-k8s-version-157431",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-157431:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-157431",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf",
	                "LowerDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-157431",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-157431/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-157431",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-157431",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-157431",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "aa723e2308722f3cfd93d1bdeea9369a4060a5bd1229ad3551902e1414df45fd",
	            "SandboxKey": "/var/run/docker/netns/aa723e230872",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-157431": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d4f1dd69a726aa0138274371b25ff8174904f4f402419e4752de500c743a887",
	                    "EndpointID": "9b0b76ef82a0d0697035f59b3a8e79687f90f53c87bce5698d9451d88d11f22f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "5e:11:a1:ce:dd:2f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-157431",
	                        "77bb37b66fd7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157431 -n old-k8s-version-157431
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-157431 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-825702 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo docker system info                                                                                                                                                                                                      │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo containerd config dump                                                                                                                                                                                                  │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo crio config                                                                                                                                                                                                             │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ delete  │ -p cilium-825702                                                                                                                                                                                                                              │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │ 26 Nov 25 20:18 UTC │
	│ start   │ -p cert-options-706331 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │ 26 Nov 25 20:19 UTC │
	│ ssh     │ cert-options-706331 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ ssh     │ -p cert-options-706331 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ delete  │ -p cert-options-706331                                                                                                                                                                                                                        │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-157431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:19:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:19:10.402682  236328 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:19:10.402909  236328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:19:10.402916  236328 out.go:374] Setting ErrFile to fd 2...
	I1126 20:19:10.402921  236328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:19:10.403139  236328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:19:10.403588  236328 out.go:368] Setting JSON to false
	I1126 20:19:10.404659  236328 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3700,"bootTime":1764184650,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:19:10.404711  236328 start.go:143] virtualization: kvm guest
	I1126 20:19:10.406714  236328 out.go:179] * [old-k8s-version-157431] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:19:10.408090  236328 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:19:10.408124  236328 notify.go:221] Checking for updates...
	I1126 20:19:10.410976  236328 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:19:10.412076  236328 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:19:10.413086  236328 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:19:10.414170  236328 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:19:10.415204  236328 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:19:10.416581  236328 config.go:182] Loaded profile config "cert-expiration-571738": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:19:10.416669  236328 config.go:182] Loaded profile config "kubernetes-upgrade-225144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:19:10.416752  236328 config.go:182] Loaded profile config "stopped-upgrade-211103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1126 20:19:10.416812  236328 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:19:10.440068  236328 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:19:10.440199  236328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:19:10.498585  236328 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-26 20:19:10.488471536 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:19:10.498719  236328 docker.go:319] overlay module found
	I1126 20:19:10.500515  236328 out.go:179] * Using the docker driver based on user configuration
	I1126 20:19:10.501585  236328 start.go:309] selected driver: docker
	I1126 20:19:10.501598  236328 start.go:927] validating driver "docker" against <nil>
	I1126 20:19:10.501609  236328 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:19:10.502302  236328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:19:10.557299  236328 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-26 20:19:10.546677414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:19:10.557431  236328 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:19:10.557683  236328 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:19:10.559287  236328 out.go:179] * Using Docker driver with root privileges
	I1126 20:19:10.560363  236328 cni.go:84] Creating CNI manager for ""
	I1126 20:19:10.560425  236328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:19:10.560436  236328 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:19:10.560498  236328 start.go:353] cluster config:
	{Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:19:10.561588  236328 out.go:179] * Starting "old-k8s-version-157431" primary control-plane node in "old-k8s-version-157431" cluster
	I1126 20:19:10.562563  236328 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:19:10.563541  236328 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:19:10.564519  236328 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:19:10.564543  236328 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1126 20:19:10.564549  236328 cache.go:65] Caching tarball of preloaded images
	I1126 20:19:10.564551  236328 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:19:10.564613  236328 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:19:10.564624  236328 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1126 20:19:10.564711  236328 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/config.json ...
	I1126 20:19:10.564734  236328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/config.json: {Name:mk9abc138078f6022f2a54416423ba78bf93298f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:10.583682  236328 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:19:10.583697  236328 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:19:10.583713  236328 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:19:10.583748  236328 start.go:360] acquireMachinesLock for old-k8s-version-157431: {Name:mkea810daa6c92d5318c72561874a0f25d5c921b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:19:10.583845  236328 start.go:364] duration metric: took 79.002µs to acquireMachinesLock for "old-k8s-version-157431"
	I1126 20:19:10.583873  236328 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:19:10.583961  236328 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:19:07.999509  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:08.982170  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:52664->192.168.103.2:8443: read: connection reset by peer
	I1126 20:19:08.982239  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:08.982300  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:09.009797  211567 cri.go:89] found id: "6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:09.009828  211567 cri.go:89] found id: "523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b"
	I1126 20:19:09.009834  211567 cri.go:89] found id: ""
	I1126 20:19:09.009842  211567 logs.go:282] 2 containers: [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7 523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b]
	I1126 20:19:09.009893  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:09.013508  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:09.017172  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:09.017232  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:09.041784  211567 cri.go:89] found id: ""
	I1126 20:19:09.041803  211567 logs.go:282] 0 containers: []
	W1126 20:19:09.041812  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:09.041819  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:09.041858  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:09.068702  211567 cri.go:89] found id: ""
	I1126 20:19:09.068722  211567 logs.go:282] 0 containers: []
	W1126 20:19:09.068729  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:09.068735  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:09.068781  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:09.094226  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:09.094247  211567 cri.go:89] found id: ""
	I1126 20:19:09.094256  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:09.094305  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:09.097950  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:09.098004  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:09.122640  211567 cri.go:89] found id: ""
	I1126 20:19:09.122658  211567 logs.go:282] 0 containers: []
	W1126 20:19:09.122666  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:09.122673  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:09.122720  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:09.146527  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:09.146545  211567 cri.go:89] found id: "3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99"
	I1126 20:19:09.146550  211567 cri.go:89] found id: ""
	I1126 20:19:09.146556  211567 logs.go:282] 2 containers: [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd 3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99]
	I1126 20:19:09.146592  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:09.150015  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:09.153178  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:09.153228  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:09.177628  211567 cri.go:89] found id: ""
	I1126 20:19:09.177647  211567 logs.go:282] 0 containers: []
	W1126 20:19:09.177653  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:09.177658  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:09.177694  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:09.202603  211567 cri.go:89] found id: ""
	I1126 20:19:09.202629  211567 logs.go:282] 0 containers: []
	W1126 20:19:09.202638  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:09.202652  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:09.202705  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:09.215530  211567 logs.go:123] Gathering logs for kube-apiserver [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7] ...
	I1126 20:19:09.215548  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:09.244540  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:09.244561  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:09.286426  211567 logs.go:123] Gathering logs for kube-controller-manager [3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99] ...
	I1126 20:19:09.286449  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99"
	I1126 20:19:09.311516  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:09.311540  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:09.377246  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:09.377276  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:09.434326  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:09.434348  211567 logs.go:123] Gathering logs for kube-apiserver [523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b] ...
	I1126 20:19:09.434362  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b"
	W1126 20:19:09.458605  211567 logs.go:130] failed kube-apiserver [523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:19:09.456774    1638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b\": container with ID starting with 523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b not found: ID does not exist" containerID="523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b"
	time="2025-11-26T20:19:09Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b\": container with ID starting with 523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b not found: ID does not exist"
	 output: 
	** stderr ** 
	E1126 20:19:09.456774    1638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b\": container with ID starting with 523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b not found: ID does not exist" containerID="523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b"
	time="2025-11-26T20:19:09Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b\": container with ID starting with 523b247ab54f1039c8bcaf648b82d25fb8007790982a54b6d1fc38499748378b not found: ID does not exist"
	
	** /stderr **
	I1126 20:19:09.458647  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:09.458660  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:09.486848  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:09.486877  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:09.533482  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:09.533514  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:10.747592  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:10.748036  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:10.748085  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:10.748138  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:10.785050  216504 cri.go:89] found id: "c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:10.785075  216504 cri.go:89] found id: "d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652"
	I1126 20:19:10.785080  216504 cri.go:89] found id: ""
	I1126 20:19:10.785090  216504 logs.go:282] 2 containers: [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8 d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652]
	I1126 20:19:10.785153  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:10.789296  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:10.792650  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:10.792719  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:10.829546  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:10.829568  216504 cri.go:89] found id: ""
	I1126 20:19:10.829579  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:10.829631  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:10.833126  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:10.833191  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:10.871555  216504 cri.go:89] found id: ""
	I1126 20:19:10.871585  216504 logs.go:282] 0 containers: []
	W1126 20:19:10.871595  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:10.871603  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:10.871669  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:10.912858  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:10.912883  216504 cri.go:89] found id: ""
	I1126 20:19:10.912894  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:10.912953  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:10.916771  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:10.916830  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:10.954169  216504 cri.go:89] found id: ""
	I1126 20:19:10.954196  216504 logs.go:282] 0 containers: []
	W1126 20:19:10.954207  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:10.954214  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:10.954369  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:11.000452  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:11.000535  216504 cri.go:89] found id: "bee7c6bd6c5b1641b497275c1c2fc53ecfbe6a264a50047de50e8c284652fb16"
	I1126 20:19:11.000543  216504 cri.go:89] found id: ""
	I1126 20:19:11.000552  216504 logs.go:282] 2 containers: [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00 bee7c6bd6c5b1641b497275c1c2fc53ecfbe6a264a50047de50e8c284652fb16]
	I1126 20:19:11.000609  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:11.004579  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:11.008189  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:11.008240  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:11.048321  216504 cri.go:89] found id: ""
	I1126 20:19:11.048345  216504 logs.go:282] 0 containers: []
	W1126 20:19:11.048352  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:11.048357  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:11.048399  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:11.093540  216504 cri.go:89] found id: ""
	I1126 20:19:11.093561  216504 logs.go:282] 0 containers: []
	W1126 20:19:11.093567  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:11.093576  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:11.093588  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:11.159933  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:11.159955  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:11.159971  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:11.196995  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:11.197021  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:11.244041  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:11.244088  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:11.261173  216504 logs.go:123] Gathering logs for kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8] ...
	I1126 20:19:11.261199  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:11.303161  216504 logs.go:123] Gathering logs for kube-apiserver [d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652] ...
	I1126 20:19:11.303194  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652"
	W1126 20:19:11.347076  216504 logs.go:130] failed kube-apiserver [d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:19:11.344073    1889 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652\": container with ID starting with d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652 not found: ID does not exist" containerID="d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652"
	time="2025-11-26T20:19:11Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652\": container with ID starting with d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1126 20:19:11.344073    1889 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652\": container with ID starting with d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652 not found: ID does not exist" containerID="d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652"
	time="2025-11-26T20:19:11Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652\": container with ID starting with d968e5f401ed935c9528289bf11cd900ffefc9bdaf60188dac737cc7d9761652 not found: ID does not exist"
	
	** /stderr **
	I1126 20:19:11.347103  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:11.347119  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:11.391031  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:11.391066  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:11.457609  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:11.457642  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:11.494708  216504 logs.go:123] Gathering logs for kube-controller-manager [bee7c6bd6c5b1641b497275c1c2fc53ecfbe6a264a50047de50e8c284652fb16] ...
	I1126 20:19:11.494735  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee7c6bd6c5b1641b497275c1c2fc53ecfbe6a264a50047de50e8c284652fb16"
	I1126 20:19:11.531411  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:11.531438  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:10.585418  236328 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:19:10.585629  236328 start.go:159] libmachine.API.Create for "old-k8s-version-157431" (driver="docker")
	I1126 20:19:10.585665  236328 client.go:173] LocalClient.Create starting
	I1126 20:19:10.585733  236328 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 20:19:10.585766  236328 main.go:143] libmachine: Decoding PEM data...
	I1126 20:19:10.585787  236328 main.go:143] libmachine: Parsing certificate...
	I1126 20:19:10.585839  236328 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 20:19:10.585873  236328 main.go:143] libmachine: Decoding PEM data...
	I1126 20:19:10.585886  236328 main.go:143] libmachine: Parsing certificate...
	I1126 20:19:10.586179  236328 cli_runner.go:164] Run: docker network inspect old-k8s-version-157431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:19:10.602287  236328 cli_runner.go:211] docker network inspect old-k8s-version-157431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:19:10.602340  236328 network_create.go:284] running [docker network inspect old-k8s-version-157431] to gather additional debugging logs...
	I1126 20:19:10.602355  236328 cli_runner.go:164] Run: docker network inspect old-k8s-version-157431
	W1126 20:19:10.617967  236328 cli_runner.go:211] docker network inspect old-k8s-version-157431 returned with exit code 1
	I1126 20:19:10.617998  236328 network_create.go:287] error running [docker network inspect old-k8s-version-157431]: docker network inspect old-k8s-version-157431: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-157431 not found
	I1126 20:19:10.618012  236328 network_create.go:289] output of [docker network inspect old-k8s-version-157431]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-157431 not found
	
	** /stderr **
	I1126 20:19:10.618099  236328 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:19:10.634046  236328 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
	I1126 20:19:10.634675  236328 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bbc6aaddce08 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:93:ea:3f:0d:7e} reservation:<nil>}
	I1126 20:19:10.635300  236328 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ecd673f5b2f2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:d0:57:1a:06:91} reservation:<nil>}
	I1126 20:19:10.636054  236328 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dba960}
	I1126 20:19:10.636085  236328 network_create.go:124] attempt to create docker network old-k8s-version-157431 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1126 20:19:10.636127  236328 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-157431 old-k8s-version-157431
	I1126 20:19:10.682958  236328 network_create.go:108] docker network old-k8s-version-157431 192.168.76.0/24 created
	I1126 20:19:10.682987  236328 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-157431" container
	I1126 20:19:10.683034  236328 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:19:10.700282  236328 cli_runner.go:164] Run: docker volume create old-k8s-version-157431 --label name.minikube.sigs.k8s.io=old-k8s-version-157431 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:19:10.716176  236328 oci.go:103] Successfully created a docker volume old-k8s-version-157431
	I1126 20:19:10.716267  236328 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-157431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-157431 --entrypoint /usr/bin/test -v old-k8s-version-157431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:19:11.106938  236328 oci.go:107] Successfully prepared a docker volume old-k8s-version-157431
	I1126 20:19:11.107017  236328 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:19:11.107033  236328 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:19:11.107140  236328 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-157431:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:19:12.064506  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:12.064988  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:19:12.065047  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:12.065115  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:12.092506  211567 cri.go:89] found id: "6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:12.092530  211567 cri.go:89] found id: ""
	I1126 20:19:12.092540  211567 logs.go:282] 1 containers: [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7]
	I1126 20:19:12.092598  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:12.096426  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:12.096494  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:12.122635  211567 cri.go:89] found id: ""
	I1126 20:19:12.122660  211567 logs.go:282] 0 containers: []
	W1126 20:19:12.122670  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:12.122677  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:12.122732  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:12.150068  211567 cri.go:89] found id: ""
	I1126 20:19:12.150089  211567 logs.go:282] 0 containers: []
	W1126 20:19:12.150113  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:12.150124  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:12.150165  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:12.176892  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:12.176911  211567 cri.go:89] found id: ""
	I1126 20:19:12.176918  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:12.176961  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:12.180694  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:12.180760  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:12.206040  211567 cri.go:89] found id: ""
	I1126 20:19:12.206065  211567 logs.go:282] 0 containers: []
	W1126 20:19:12.206082  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:12.206088  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:12.206130  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:12.235420  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:12.235443  211567 cri.go:89] found id: "3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99"
	I1126 20:19:12.235450  211567 cri.go:89] found id: ""
	I1126 20:19:12.235489  211567 logs.go:282] 2 containers: [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd 3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99]
	I1126 20:19:12.235553  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:12.239415  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:12.242930  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:12.242976  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:12.268970  211567 cri.go:89] found id: ""
	I1126 20:19:12.268997  211567 logs.go:282] 0 containers: []
	W1126 20:19:12.269006  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:12.269012  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:12.269070  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:12.297056  211567 cri.go:89] found id: ""
	I1126 20:19:12.297084  211567 logs.go:282] 0 containers: []
	W1126 20:19:12.297094  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:12.297118  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:12.297129  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:12.312293  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:12.312319  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:12.366273  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:12.366294  211567 logs.go:123] Gathering logs for kube-apiserver [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7] ...
	I1126 20:19:12.366310  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:12.402259  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:12.402296  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:12.428887  211567 logs.go:123] Gathering logs for kube-controller-manager [3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99] ...
	I1126 20:19:12.428921  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99"
	I1126 20:19:12.456188  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:12.456212  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:12.487115  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:12.487149  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:12.554497  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:12.554530  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:12.599578  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:12.599606  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:15.143375  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:15.143783  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:19:15.143826  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:15.143888  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:15.169891  211567 cri.go:89] found id: "6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:15.169910  211567 cri.go:89] found id: ""
	I1126 20:19:15.169917  211567 logs.go:282] 1 containers: [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7]
	I1126 20:19:15.169968  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:15.173723  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:15.173780  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:15.199636  211567 cri.go:89] found id: ""
	I1126 20:19:15.199671  211567 logs.go:282] 0 containers: []
	W1126 20:19:15.199681  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:15.199690  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:15.199741  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:15.225569  211567 cri.go:89] found id: ""
	I1126 20:19:15.225596  211567 logs.go:282] 0 containers: []
	W1126 20:19:15.225604  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:15.225610  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:15.225659  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:15.250833  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:15.250858  211567 cri.go:89] found id: ""
	I1126 20:19:15.250868  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:15.250918  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:15.254495  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:15.254556  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:15.281377  211567 cri.go:89] found id: ""
	I1126 20:19:15.281406  211567 logs.go:282] 0 containers: []
	W1126 20:19:15.281417  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:15.281425  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:15.281504  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:15.307080  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:15.307099  211567 cri.go:89] found id: "3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99"
	I1126 20:19:15.307103  211567 cri.go:89] found id: ""
	I1126 20:19:15.307114  211567 logs.go:282] 2 containers: [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd 3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99]
	I1126 20:19:15.307166  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:15.310914  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:15.314379  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:15.314484  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:15.340420  211567 cri.go:89] found id: ""
	I1126 20:19:15.340443  211567 logs.go:282] 0 containers: []
	W1126 20:19:15.340451  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:15.340467  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:15.340520  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:15.366919  211567 cri.go:89] found id: ""
	I1126 20:19:15.366942  211567 logs.go:282] 0 containers: []
	W1126 20:19:15.366951  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:15.366970  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:15.366983  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:15.380487  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:15.380512  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:15.433352  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:15.433390  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:15.433402  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:15.458859  211567 logs.go:123] Gathering logs for kube-controller-manager [3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99] ...
	I1126 20:19:15.458884  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99"
	I1126 20:19:15.484285  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:15.484313  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:15.553664  211567 logs.go:123] Gathering logs for kube-apiserver [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7] ...
	I1126 20:19:15.553690  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:15.583289  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:15.583313  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:15.627106  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:15.627134  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:15.668876  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:15.668909  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:14.130257  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:14.130622  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:14.130670  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:14.130715  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:14.164147  216504 cri.go:89] found id: "c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:14.164168  216504 cri.go:89] found id: ""
	I1126 20:19:14.164177  216504 logs.go:282] 1 containers: [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8]
	I1126 20:19:14.164238  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:14.167690  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:14.167740  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:14.201389  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:14.201413  216504 cri.go:89] found id: ""
	I1126 20:19:14.201424  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:14.201486  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:14.204973  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:14.205028  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:14.237300  216504 cri.go:89] found id: ""
	I1126 20:19:14.237323  216504 logs.go:282] 0 containers: []
	W1126 20:19:14.237332  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:14.237340  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:14.237394  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:14.272336  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:14.272358  216504 cri.go:89] found id: ""
	I1126 20:19:14.272366  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:14.272413  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:14.276011  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:14.276069  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:14.308034  216504 cri.go:89] found id: ""
	I1126 20:19:14.308059  216504 logs.go:282] 0 containers: []
	W1126 20:19:14.308066  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:14.308072  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:14.308135  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:14.339870  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:14.339893  216504 cri.go:89] found id: ""
	I1126 20:19:14.339904  216504 logs.go:282] 1 containers: [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:14.339956  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:14.343349  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:14.343400  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:14.375763  216504 cri.go:89] found id: ""
	I1126 20:19:14.375787  216504 logs.go:282] 0 containers: []
	W1126 20:19:14.375794  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:14.375799  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:14.375854  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:14.409757  216504 cri.go:89] found id: ""
	I1126 20:19:14.409783  216504 logs.go:282] 0 containers: []
	W1126 20:19:14.409791  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:14.409810  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:14.409825  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:14.490307  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:14.490338  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:14.550618  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:14.550644  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:14.550659  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:14.584367  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:14.584396  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:14.648190  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:14.648218  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:14.681294  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:14.681317  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:14.696527  216504 logs.go:123] Gathering logs for kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8] ...
	I1126 20:19:14.696551  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:14.731256  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:14.731282  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:14.768746  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:14.768772  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:17.306860  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:17.307240  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:17.307292  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:17.307339  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:17.340952  216504 cri.go:89] found id: "c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:17.340972  216504 cri.go:89] found id: ""
	I1126 20:19:17.340980  216504 logs.go:282] 1 containers: [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8]
	I1126 20:19:17.341026  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:17.344518  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:17.344566  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:17.377031  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:17.377049  216504 cri.go:89] found id: ""
	I1126 20:19:17.377056  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:17.377103  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:17.380645  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:17.380704  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:17.413448  216504 cri.go:89] found id: ""
	I1126 20:19:17.413482  216504 logs.go:282] 0 containers: []
	W1126 20:19:17.413493  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:17.413500  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:17.413543  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:17.445542  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:17.445562  216504 cri.go:89] found id: ""
	I1126 20:19:17.445572  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:17.445630  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:17.449389  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:17.449445  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:17.482474  216504 cri.go:89] found id: ""
	I1126 20:19:17.482499  216504 logs.go:282] 0 containers: []
	W1126 20:19:17.482510  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:17.482518  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:17.482563  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:17.515095  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:17.515117  216504 cri.go:89] found id: ""
	I1126 20:19:17.515127  216504 logs.go:282] 1 containers: [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:17.515184  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:17.518552  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:17.518606  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:17.550017  216504 cri.go:89] found id: ""
	I1126 20:19:17.550041  216504 logs.go:282] 0 containers: []
	W1126 20:19:17.550050  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:17.550057  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:17.550108  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:17.582385  216504 cri.go:89] found id: ""
	I1126 20:19:17.582409  216504 logs.go:282] 0 containers: []
	W1126 20:19:17.582416  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:17.582430  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:17.582442  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:17.646927  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:17.646958  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:17.680417  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:17.680442  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:17.716732  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:17.716759  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:17.752494  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:17.752519  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:17.767175  216504 logs.go:123] Gathering logs for kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8] ...
	I1126 20:19:17.767198  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:17.803985  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:17.804012  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:17.835545  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:17.835577  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:17.914130  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:17.914158  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:17.970423  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:16.037347  236328 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-157431:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.930168409s)
	I1126 20:19:16.037375  236328 kic.go:203] duration metric: took 4.930341078s to extract preloaded images to volume ...
	W1126 20:19:16.037443  236328 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:19:16.037503  236328 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:19:16.037549  236328 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:19:16.095018  236328 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-157431 --name old-k8s-version-157431 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-157431 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-157431 --network old-k8s-version-157431 --ip 192.168.76.2 --volume old-k8s-version-157431:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:19:16.379156  236328 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Running}}
	I1126 20:19:16.398163  236328 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:19:16.415108  236328 cli_runner.go:164] Run: docker exec old-k8s-version-157431 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:19:16.463351  236328 oci.go:144] the created container "old-k8s-version-157431" has a running status.
	I1126 20:19:16.463379  236328 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa...
	I1126 20:19:16.480346  236328 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:19:16.505389  236328 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:19:16.529986  236328 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:19:16.530008  236328 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-157431 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:19:16.576985  236328 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:19:16.595841  236328 machine.go:94] provisionDockerMachine start ...
	I1126 20:19:16.595937  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:16.617007  236328 main.go:143] libmachine: Using SSH client type: native
	I1126 20:19:16.617363  236328 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1126 20:19:16.617383  236328 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:19:16.618024  236328 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40348->127.0.0.1:33053: read: connection reset by peer
	I1126 20:19:19.757002  236328 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-157431
	
	I1126 20:19:19.757044  236328 ubuntu.go:182] provisioning hostname "old-k8s-version-157431"
	I1126 20:19:19.757110  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:19.774981  236328 main.go:143] libmachine: Using SSH client type: native
	I1126 20:19:19.775197  236328 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1126 20:19:19.775212  236328 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-157431 && echo "old-k8s-version-157431" | sudo tee /etc/hostname
	I1126 20:19:19.920301  236328 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-157431
	
	I1126 20:19:19.920390  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:19.937678  236328 main.go:143] libmachine: Using SSH client type: native
	I1126 20:19:19.937951  236328 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1126 20:19:19.937977  236328 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-157431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-157431/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-157431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:19:20.075101  236328 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:19:20.075144  236328 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:19:20.075167  236328 ubuntu.go:190] setting up certificates
	I1126 20:19:20.075178  236328 provision.go:84] configureAuth start
	I1126 20:19:20.075246  236328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-157431
	I1126 20:19:20.093651  236328 provision.go:143] copyHostCerts
	I1126 20:19:20.093702  236328 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:19:20.093713  236328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:19:20.093774  236328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:19:20.093861  236328 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:19:20.093869  236328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:19:20.093895  236328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:19:20.093965  236328 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:19:20.093973  236328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:19:20.094002  236328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:19:20.094110  236328 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-157431 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-157431]
	I1126 20:19:20.141483  236328 provision.go:177] copyRemoteCerts
	I1126 20:19:20.141545  236328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:19:20.141586  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:20.157959  236328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:19:20.254895  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:19:20.274312  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1126 20:19:20.290583  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:19:20.306828  236328 provision.go:87] duration metric: took 231.619857ms to configureAuth
	I1126 20:19:20.306852  236328 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:19:20.307048  236328 config.go:182] Loaded profile config "old-k8s-version-157431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:19:20.307179  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:20.324578  236328 main.go:143] libmachine: Using SSH client type: native
	I1126 20:19:20.324862  236328 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1126 20:19:20.324889  236328 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:19:18.200519  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:18.200897  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:19:18.200946  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:18.201011  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:18.226725  211567 cri.go:89] found id: "6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:18.226740  211567 cri.go:89] found id: ""
	I1126 20:19:18.226748  211567 logs.go:282] 1 containers: [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7]
	I1126 20:19:18.226800  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:18.230418  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:18.230480  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:18.254754  211567 cri.go:89] found id: ""
	I1126 20:19:18.254778  211567 logs.go:282] 0 containers: []
	W1126 20:19:18.254800  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:18.254808  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:18.254856  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:18.280470  211567 cri.go:89] found id: ""
	I1126 20:19:18.280493  211567 logs.go:282] 0 containers: []
	W1126 20:19:18.280503  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:18.280509  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:18.280548  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:18.305253  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:18.305269  211567 cri.go:89] found id: ""
	I1126 20:19:18.305278  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:18.305345  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:18.308916  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:18.308965  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:18.333024  211567 cri.go:89] found id: ""
	I1126 20:19:18.333045  211567 logs.go:282] 0 containers: []
	W1126 20:19:18.333055  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:18.333062  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:18.333116  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:18.357374  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:18.357390  211567 cri.go:89] found id: "3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99"
	I1126 20:19:18.357394  211567 cri.go:89] found id: ""
	I1126 20:19:18.357400  211567 logs.go:282] 2 containers: [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd 3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99]
	I1126 20:19:18.357441  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:18.361027  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:18.364517  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:18.364565  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:18.388487  211567 cri.go:89] found id: ""
	I1126 20:19:18.388510  211567 logs.go:282] 0 containers: []
	W1126 20:19:18.388519  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:18.388525  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:18.388565  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:18.412253  211567 cri.go:89] found id: ""
	I1126 20:19:18.412276  211567 logs.go:282] 0 containers: []
	W1126 20:19:18.412285  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:18.412315  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:18.412328  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:18.478994  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:18.479016  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:18.492521  211567 logs.go:123] Gathering logs for kube-apiserver [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7] ...
	I1126 20:19:18.492540  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:18.521919  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:18.521944  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:18.565342  211567 logs.go:123] Gathering logs for kube-controller-manager [3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99] ...
	I1126 20:19:18.565367  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3103afa4dcf3997e79a2e518b6cce49391521b3090547b6d5132026790ec4e99"
	I1126 20:19:18.588960  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:18.588980  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:18.630022  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:18.630053  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:18.683192  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:18.683212  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:18.683228  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:18.708811  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:18.708839  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:20.602907  236328 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:19:20.602939  236328 machine.go:97] duration metric: took 4.007076933s to provisionDockerMachine
	I1126 20:19:20.602952  236328 client.go:176] duration metric: took 10.017276912s to LocalClient.Create
	I1126 20:19:20.602976  236328 start.go:167] duration metric: took 10.01734694s to libmachine.API.Create "old-k8s-version-157431"
	I1126 20:19:20.602989  236328 start.go:293] postStartSetup for "old-k8s-version-157431" (driver="docker")
	I1126 20:19:20.603001  236328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:19:20.603066  236328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:19:20.603119  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:20.623805  236328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:19:20.726483  236328 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:19:20.730007  236328 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:19:20.730029  236328 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:19:20.730038  236328 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:19:20.730092  236328 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:19:20.730177  236328 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:19:20.730285  236328 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:19:20.737575  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:19:20.757120  236328 start.go:296] duration metric: took 154.11698ms for postStartSetup
	I1126 20:19:20.757495  236328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-157431
	I1126 20:19:20.776420  236328 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/config.json ...
	I1126 20:19:20.776715  236328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:19:20.776774  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:20.795320  236328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:19:20.891364  236328 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:19:20.896089  236328 start.go:128] duration metric: took 10.312115952s to createHost
	I1126 20:19:20.896122  236328 start.go:83] releasing machines lock for "old-k8s-version-157431", held for 10.312264219s
	I1126 20:19:20.896219  236328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-157431
	I1126 20:19:20.916337  236328 ssh_runner.go:195] Run: cat /version.json
	I1126 20:19:20.916392  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:20.916436  236328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:19:20.916523  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:20.935255  236328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:19:20.936124  236328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:19:21.103542  236328 ssh_runner.go:195] Run: systemctl --version
	I1126 20:19:21.111017  236328 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:19:21.144842  236328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:19:21.149519  236328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:19:21.149579  236328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:19:21.174164  236328 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:19:21.174185  236328 start.go:496] detecting cgroup driver to use...
	I1126 20:19:21.174215  236328 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:19:21.174269  236328 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:19:21.188618  236328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:19:21.199845  236328 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:19:21.199892  236328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:19:21.214954  236328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:19:21.230880  236328 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:19:21.322693  236328 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:19:21.425363  236328 docker.go:234] disabling docker service ...
	I1126 20:19:21.425411  236328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:19:21.443568  236328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:19:21.456605  236328 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:19:21.556652  236328 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:19:21.645061  236328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:19:21.658050  236328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:19:21.672756  236328 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1126 20:19:21.672808  236328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:19:21.682295  236328 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:19:21.682342  236328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:19:21.690500  236328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:19:21.698596  236328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:19:21.706827  236328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:19:21.715225  236328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:19:21.725161  236328 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:19:21.738024  236328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:19:21.747402  236328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:19:21.754451  236328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:19:21.761406  236328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:19:21.842238  236328 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:19:21.968653  236328 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:19:21.968710  236328 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:19:21.972482  236328 start.go:564] Will wait 60s for crictl version
	I1126 20:19:21.972545  236328 ssh_runner.go:195] Run: which crictl
	I1126 20:19:21.975920  236328 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:19:21.998822  236328 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:19:21.998889  236328 ssh_runner.go:195] Run: crio --version
	I1126 20:19:22.026047  236328 ssh_runner.go:195] Run: crio --version
	I1126 20:19:22.053681  236328 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1126 20:19:20.470949  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:20.471342  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:20.471400  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:20.471452  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:20.506338  216504 cri.go:89] found id: "c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:20.506361  216504 cri.go:89] found id: ""
	I1126 20:19:20.506370  216504 logs.go:282] 1 containers: [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8]
	I1126 20:19:20.506415  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:20.510207  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:20.510281  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:20.545245  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:20.545272  216504 cri.go:89] found id: ""
	I1126 20:19:20.545283  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:20.545333  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:20.549089  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:20.549176  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:20.583785  216504 cri.go:89] found id: ""
	I1126 20:19:20.583811  216504 logs.go:282] 0 containers: []
	W1126 20:19:20.583820  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:20.583828  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:20.583886  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:20.618592  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:20.618618  216504 cri.go:89] found id: ""
	I1126 20:19:20.618629  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:20.618681  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:20.622797  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:20.622854  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:20.657465  216504 cri.go:89] found id: ""
	I1126 20:19:20.657489  216504 logs.go:282] 0 containers: []
	W1126 20:19:20.657496  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:20.657502  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:20.657552  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:20.689631  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:20.689651  216504 cri.go:89] found id: ""
	I1126 20:19:20.689659  216504 logs.go:282] 1 containers: [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:20.689700  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:20.693083  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:20.693132  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:20.726861  216504 cri.go:89] found id: ""
	I1126 20:19:20.726886  216504 logs.go:282] 0 containers: []
	W1126 20:19:20.726895  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:20.726903  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:20.726962  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:20.760321  216504 cri.go:89] found id: ""
	I1126 20:19:20.760340  216504 logs.go:282] 0 containers: []
	W1126 20:19:20.760347  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:20.760362  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:20.760376  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:20.777396  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:20.777425  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:20.812531  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:20.812558  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:20.869475  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:20.869496  216504 logs.go:123] Gathering logs for kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8] ...
	I1126 20:19:20.869507  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:20.906830  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:20.906858  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:20.978515  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:20.978540  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:21.012154  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:21.012184  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:21.051083  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:21.051107  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:21.088008  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:21.088031  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:22.054727  236328 cli_runner.go:164] Run: docker network inspect old-k8s-version-157431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:19:22.071435  236328 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:19:22.075254  236328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:19:22.085189  236328 kubeadm.go:884] updating cluster {Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:19:22.085328  236328 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:19:22.085404  236328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:19:22.114299  236328 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:19:22.114317  236328 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:19:22.114366  236328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:19:22.138339  236328 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:19:22.138359  236328 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:19:22.138369  236328 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1126 20:19:22.138453  236328 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-157431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:19:22.138549  236328 ssh_runner.go:195] Run: crio config
	I1126 20:19:22.180445  236328 cni.go:84] Creating CNI manager for ""
	I1126 20:19:22.180484  236328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:19:22.180505  236328 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:19:22.180530  236328 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-157431 NodeName:old-k8s-version-157431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:19:22.180650  236328 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-157431"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:19:22.180705  236328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1126 20:19:22.188306  236328 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:19:22.188355  236328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:19:22.195639  236328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1126 20:19:22.207520  236328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:19:22.221466  236328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1126 20:19:22.233096  236328 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:19:22.236383  236328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:19:22.245759  236328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:19:22.324905  236328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:19:22.348317  236328 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431 for IP: 192.168.76.2
	I1126 20:19:22.348338  236328 certs.go:195] generating shared ca certs ...
	I1126 20:19:22.348356  236328 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:22.348531  236328 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:19:22.348596  236328 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:19:22.348610  236328 certs.go:257] generating profile certs ...
	I1126 20:19:22.348667  236328 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.key
	I1126 20:19:22.348679  236328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.crt with IP's: []
	I1126 20:19:22.405484  236328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.crt ...
	I1126 20:19:22.405509  236328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.crt: {Name:mkf3eaee83806f290fcf17151ab43d057754355e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:22.405658  236328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.key ...
	I1126 20:19:22.405671  236328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.key: {Name:mk10ff56f7d2163597e27fd810501816148a11e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:22.405753  236328 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key.162086cc
	I1126 20:19:22.405774  236328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.crt.162086cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1126 20:19:22.596335  236328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.crt.162086cc ...
	I1126 20:19:22.596365  236328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.crt.162086cc: {Name:mk93443fb8892a18a87416e9e328a1f5f983a1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:22.596538  236328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key.162086cc ...
	I1126 20:19:22.596553  236328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key.162086cc: {Name:mkb95ef80c232290eec590af1a4b935ebacd6178 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:22.596638  236328 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.crt.162086cc -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.crt
	I1126 20:19:22.596710  236328 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key.162086cc -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key
	I1126 20:19:22.596774  236328 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.key
	I1126 20:19:22.596789  236328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.crt with IP's: []
	I1126 20:19:22.632425  236328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.crt ...
	I1126 20:19:22.632446  236328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.crt: {Name:mkd9a6bd59d3f12931e583e7850879159eff3d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:22.632585  236328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.key ...
	I1126 20:19:22.632600  236328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.key: {Name:mk5d577288022a9ea4298889eafd153579b8a0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:22.632786  236328 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:19:22.632819  236328 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:19:22.632828  236328 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:19:22.632850  236328 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:19:22.632875  236328 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:19:22.632898  236328 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:19:22.632944  236328 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:19:22.633583  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:19:22.651895  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:19:22.668253  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:19:22.684896  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:19:22.701039  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:19:22.717047  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:19:22.733625  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:19:22.750036  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:19:22.766217  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:19:22.784084  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:19:22.800246  236328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:19:22.816523  236328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:19:22.828123  236328 ssh_runner.go:195] Run: openssl version
	I1126 20:19:22.833783  236328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:19:22.841447  236328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:19:22.844789  236328 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:19:22.844832  236328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:19:22.878060  236328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:19:22.885845  236328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:19:22.893592  236328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:19:22.897012  236328 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:19:22.897060  236328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:19:22.930732  236328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:19:22.938645  236328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:19:22.946389  236328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:19:22.949802  236328 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:19:22.949844  236328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:19:22.983159  236328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:19:22.991157  236328 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:19:22.994401  236328 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:19:22.994447  236328 kubeadm.go:401] StartCluster: {Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:19:22.994530  236328 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:19:22.994568  236328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:19:23.020699  236328 cri.go:89] found id: ""
	I1126 20:19:23.020738  236328 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:19:23.028131  236328 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:19:23.035286  236328 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:19:23.035322  236328 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:19:23.042357  236328 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:19:23.042375  236328 kubeadm.go:158] found existing configuration files:
	
	I1126 20:19:23.042412  236328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:19:23.049482  236328 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:19:23.049530  236328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:19:23.056253  236328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:19:23.063286  236328 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:19:23.063333  236328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:19:23.070197  236328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:19:23.077117  236328 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:19:23.077155  236328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:19:23.083851  236328 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:19:23.090634  236328 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:19:23.090677  236328 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:19:23.097281  236328 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:19:23.188020  236328 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 20:19:23.254837  236328 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:19:21.238582  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:21.238949  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:19:21.239005  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:21.239054  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:21.271764  211567 cri.go:89] found id: "6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:21.271805  211567 cri.go:89] found id: ""
	I1126 20:19:21.271815  211567 logs.go:282] 1 containers: [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7]
	I1126 20:19:21.271891  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:21.276192  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:21.276258  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:21.303236  211567 cri.go:89] found id: ""
	I1126 20:19:21.303262  211567 logs.go:282] 0 containers: []
	W1126 20:19:21.303271  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:21.303278  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:21.303322  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:21.330766  211567 cri.go:89] found id: ""
	I1126 20:19:21.330785  211567 logs.go:282] 0 containers: []
	W1126 20:19:21.330791  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:21.330797  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:21.330844  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:21.358212  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:21.358239  211567 cri.go:89] found id: ""
	I1126 20:19:21.358249  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:21.358300  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:21.365072  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:21.365151  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:21.394409  211567 cri.go:89] found id: ""
	I1126 20:19:21.394436  211567 logs.go:282] 0 containers: []
	W1126 20:19:21.394446  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:21.394453  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:21.394519  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:21.420067  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:21.420098  211567 cri.go:89] found id: ""
	I1126 20:19:21.420107  211567 logs.go:282] 1 containers: [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd]
	I1126 20:19:21.420162  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:21.423745  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:21.423811  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:21.448958  211567 cri.go:89] found id: ""
	I1126 20:19:21.448980  211567 logs.go:282] 0 containers: []
	W1126 20:19:21.448987  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:21.448992  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:21.449038  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:21.474391  211567 cri.go:89] found id: ""
	I1126 20:19:21.474414  211567 logs.go:282] 0 containers: []
	W1126 20:19:21.474421  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:21.474429  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:21.474443  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:21.544295  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:21.544322  211567 logs.go:123] Gathering logs for kube-apiserver [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7] ...
	I1126 20:19:21.544340  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:21.577341  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:21.577364  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:21.625744  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:21.625769  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:21.651608  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:21.651631  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:21.697141  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:21.697165  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:21.727708  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:21.727727  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:21.802524  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:21.802551  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:24.316517  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:24.316876  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:19:24.316932  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:24.316979  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:24.345940  211567 cri.go:89] found id: "6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:24.345959  211567 cri.go:89] found id: ""
	I1126 20:19:24.345966  211567 logs.go:282] 1 containers: [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7]
	I1126 20:19:24.346016  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:24.350277  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:24.350337  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:24.376177  211567 cri.go:89] found id: ""
	I1126 20:19:24.376195  211567 logs.go:282] 0 containers: []
	W1126 20:19:24.376201  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:24.376207  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:24.376255  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:24.400296  211567 cri.go:89] found id: ""
	I1126 20:19:24.400317  211567 logs.go:282] 0 containers: []
	W1126 20:19:24.400326  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:24.400332  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:24.400386  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:24.424373  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:24.424392  211567 cri.go:89] found id: ""
	I1126 20:19:24.424401  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:24.424470  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:24.428017  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:24.428065  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:24.452948  211567 cri.go:89] found id: ""
	I1126 20:19:24.452970  211567 logs.go:282] 0 containers: []
	W1126 20:19:24.452980  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:24.452987  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:24.453046  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:24.477345  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:24.477364  211567 cri.go:89] found id: ""
	I1126 20:19:24.477373  211567 logs.go:282] 1 containers: [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd]
	I1126 20:19:24.477425  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:24.480956  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:24.481007  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:24.505360  211567 cri.go:89] found id: ""
	I1126 20:19:24.505380  211567 logs.go:282] 0 containers: []
	W1126 20:19:24.505389  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:24.505396  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:24.505445  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:24.532062  211567 cri.go:89] found id: ""
	I1126 20:19:24.532082  211567 logs.go:282] 0 containers: []
	W1126 20:19:24.532090  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:24.532100  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:24.532113  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:24.572847  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:24.572867  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:24.603937  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:24.603962  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:24.674834  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:24.674861  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:24.689283  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:24.689313  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:24.744248  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:24.744271  211567 logs.go:123] Gathering logs for kube-apiserver [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7] ...
	I1126 20:19:24.744287  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:24.775590  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:24.775612  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:24.830075  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:24.830099  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:23.674400  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:23.674807  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:23.674856  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:23.674906  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:23.709422  216504 cri.go:89] found id: "c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:23.709447  216504 cri.go:89] found id: ""
	I1126 20:19:23.709475  216504 logs.go:282] 1 containers: [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8]
	I1126 20:19:23.709540  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:23.713337  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:23.713403  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:23.745443  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:23.745481  216504 cri.go:89] found id: ""
	I1126 20:19:23.745492  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:23.745538  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:23.748853  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:23.748923  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:23.781400  216504 cri.go:89] found id: ""
	I1126 20:19:23.781418  216504 logs.go:282] 0 containers: []
	W1126 20:19:23.781424  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:23.781429  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:23.781484  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:23.815742  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:23.815759  216504 cri.go:89] found id: ""
	I1126 20:19:23.815766  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:23.815806  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:23.819297  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:23.819346  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:23.852117  216504 cri.go:89] found id: ""
	I1126 20:19:23.852139  216504 logs.go:282] 0 containers: []
	W1126 20:19:23.852148  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:23.852155  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:23.852209  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:23.884837  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:23.884858  216504 cri.go:89] found id: ""
	I1126 20:19:23.884867  216504 logs.go:282] 1 containers: [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:23.884923  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:23.888354  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:23.888409  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:23.920607  216504 cri.go:89] found id: ""
	I1126 20:19:23.920633  216504 logs.go:282] 0 containers: []
	W1126 20:19:23.920642  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:23.920648  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:23.920693  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:23.956149  216504 cri.go:89] found id: ""
	I1126 20:19:23.956176  216504 logs.go:282] 0 containers: []
	W1126 20:19:23.956188  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:23.956204  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:23.956216  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:24.035697  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:24.035724  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:24.096852  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:24.096880  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:24.096895  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:24.130647  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:24.130674  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:24.165878  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:24.165908  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:24.201031  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:24.201060  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:24.216382  216504 logs.go:123] Gathering logs for kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8] ...
	I1126 20:19:24.216407  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:24.252645  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:24.252669  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:24.315480  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:24.315510  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:26.853273  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:26.853799  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:26.853869  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:26.853922  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:26.899520  216504 cri.go:89] found id: "c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:26.899547  216504 cri.go:89] found id: ""
	I1126 20:19:26.899557  216504 logs.go:282] 1 containers: [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8]
	I1126 20:19:26.899611  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:26.903725  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:26.903774  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:26.947719  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:26.947741  216504 cri.go:89] found id: ""
	I1126 20:19:26.947750  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:26.947804  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:26.951800  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:26.951862  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:26.986998  216504 cri.go:89] found id: ""
	I1126 20:19:26.987019  216504 logs.go:282] 0 containers: []
	W1126 20:19:26.987027  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:26.987034  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:26.987081  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:27.024853  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:27.024874  216504 cri.go:89] found id: ""
	I1126 20:19:27.024884  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:27.024932  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:27.028531  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:27.028590  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:27.062768  216504 cri.go:89] found id: ""
	I1126 20:19:27.062790  216504 logs.go:282] 0 containers: []
	W1126 20:19:27.062799  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:27.062806  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:27.062858  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:27.099655  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:27.099677  216504 cri.go:89] found id: ""
	I1126 20:19:27.099685  216504 logs.go:282] 1 containers: [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:27.099738  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:27.103319  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:27.103377  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:27.144874  216504 cri.go:89] found id: ""
	I1126 20:19:27.144895  216504 logs.go:282] 0 containers: []
	W1126 20:19:27.144904  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:27.144911  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:27.144958  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:27.183789  216504 cri.go:89] found id: ""
	I1126 20:19:27.183813  216504 logs.go:282] 0 containers: []
	W1126 20:19:27.183823  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:27.183840  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:27.183852  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:27.273109  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:27.273139  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:27.288591  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:27.288613  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:27.364696  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:27.364716  216504 logs.go:123] Gathering logs for kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8] ...
	I1126 20:19:27.364730  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:27.409172  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:27.409202  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:27.491518  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:27.491549  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:27.535733  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:27.535763  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:27.579109  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:27.579144  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:27.624415  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:27.624447  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:27.354534  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:27.354971  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:19:27.355029  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:27.355084  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:27.386921  211567 cri.go:89] found id: "6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:27.386946  211567 cri.go:89] found id: ""
	I1126 20:19:27.386956  211567 logs.go:282] 1 containers: [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7]
	I1126 20:19:27.387015  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:27.390934  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:27.390995  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:27.423470  211567 cri.go:89] found id: ""
	I1126 20:19:27.423493  211567 logs.go:282] 0 containers: []
	W1126 20:19:27.423506  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:27.423514  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:27.423564  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:27.452620  211567 cri.go:89] found id: ""
	I1126 20:19:27.452645  211567 logs.go:282] 0 containers: []
	W1126 20:19:27.452655  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:27.452662  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:27.452720  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:27.485792  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:27.485813  211567 cri.go:89] found id: ""
	I1126 20:19:27.485822  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:27.485876  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:27.490717  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:27.490789  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:27.521682  211567 cri.go:89] found id: ""
	I1126 20:19:27.521709  211567 logs.go:282] 0 containers: []
	W1126 20:19:27.521722  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:27.521730  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:27.521790  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:27.555917  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:27.555938  211567 cri.go:89] found id: ""
	I1126 20:19:27.555946  211567 logs.go:282] 1 containers: [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd]
	I1126 20:19:27.556001  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:27.560354  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:27.560421  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:27.592625  211567 cri.go:89] found id: ""
	I1126 20:19:27.592651  211567 logs.go:282] 0 containers: []
	W1126 20:19:27.592662  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:27.592669  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:27.592722  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:27.624993  211567 cri.go:89] found id: ""
	I1126 20:19:27.625018  211567 logs.go:282] 0 containers: []
	W1126 20:19:27.625028  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:27.625045  211567 logs.go:123] Gathering logs for kube-apiserver [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7] ...
	I1126 20:19:27.625118  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:27.665767  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:27.665796  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:27.722389  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:27.722414  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:27.750942  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:27.750970  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:27.807911  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:27.807944  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:27.843049  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:27.843075  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:27.929569  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:27.929600  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:27.945492  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:27.945522  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:28.004198  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:30.504442  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:30.505020  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:19:30.505077  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:30.505135  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:30.535966  211567 cri.go:89] found id: "6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:30.535995  211567 cri.go:89] found id: ""
	I1126 20:19:30.536006  211567 logs.go:282] 1 containers: [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7]
	I1126 20:19:30.536058  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:30.540059  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:30.540118  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:30.567096  211567 cri.go:89] found id: ""
	I1126 20:19:30.567125  211567 logs.go:282] 0 containers: []
	W1126 20:19:30.567134  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:30.567141  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:30.567199  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:30.595932  211567 cri.go:89] found id: ""
	I1126 20:19:30.595956  211567 logs.go:282] 0 containers: []
	W1126 20:19:30.595965  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:30.595978  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:30.596031  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:30.622971  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:30.622987  211567 cri.go:89] found id: ""
	I1126 20:19:30.622995  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:30.623048  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:30.626930  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:30.626985  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:30.652241  211567 cri.go:89] found id: ""
	I1126 20:19:30.652265  211567 logs.go:282] 0 containers: []
	W1126 20:19:30.652274  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:30.652281  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:30.652339  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:30.679614  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:30.679635  211567 cri.go:89] found id: ""
	I1126 20:19:30.679645  211567 logs.go:282] 1 containers: [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd]
	I1126 20:19:30.679712  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:30.683561  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:30.683620  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:30.712569  211567 cri.go:89] found id: ""
	I1126 20:19:30.712594  211567 logs.go:282] 0 containers: []
	W1126 20:19:30.712604  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:30.712611  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:30.712666  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:31.329939  236328 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1126 20:19:31.330031  236328 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:19:31.330188  236328 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:19:31.330285  236328 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:19:31.330347  236328 kubeadm.go:319] OS: Linux
	I1126 20:19:31.330405  236328 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:19:31.330498  236328 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:19:31.330566  236328 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:19:31.330630  236328 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:19:31.330700  236328 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:19:31.330768  236328 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:19:31.330836  236328 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:19:31.330912  236328 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:19:31.331024  236328 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:19:31.331160  236328 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:19:31.331299  236328 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1126 20:19:31.331390  236328 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:19:31.333277  236328 out.go:252]   - Generating certificates and keys ...
	I1126 20:19:31.333366  236328 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:19:31.333453  236328 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:19:31.333570  236328 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:19:31.333656  236328 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:19:31.333751  236328 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:19:31.333833  236328 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:19:31.333916  236328 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:19:31.334114  236328 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-157431] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:19:31.334199  236328 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:19:31.334375  236328 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-157431] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:19:31.334482  236328 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:19:31.334575  236328 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:19:31.334639  236328 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:19:31.334721  236328 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:19:31.334806  236328 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:19:31.334885  236328 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:19:31.334983  236328 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:19:31.335073  236328 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:19:31.335190  236328 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:19:31.335281  236328 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 20:19:31.336362  236328 out.go:252]   - Booting up control plane ...
	I1126 20:19:31.336449  236328 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:19:31.336576  236328 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:19:31.336658  236328 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:19:31.336747  236328 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:19:31.336820  236328 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:19:31.336855  236328 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:19:31.337025  236328 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1126 20:19:31.337119  236328 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.001808 seconds
	I1126 20:19:31.337254  236328 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:19:31.337447  236328 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:19:31.337551  236328 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:19:31.337780  236328 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-157431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:19:31.337837  236328 kubeadm.go:319] [bootstrap-token] Using token: ifhmlj.38xz6luczf2bna95
	I1126 20:19:31.339524  236328 out.go:252]   - Configuring RBAC rules ...
	I1126 20:19:31.339657  236328 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:19:31.339775  236328 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:19:31.339940  236328 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:19:31.340081  236328 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:19:31.340202  236328 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:19:31.340333  236328 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:19:31.340521  236328 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:19:31.340589  236328 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:19:31.340667  236328 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:19:31.340682  236328 kubeadm.go:319] 
	I1126 20:19:31.340772  236328 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:19:31.340785  236328 kubeadm.go:319] 
	I1126 20:19:31.340933  236328 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:19:31.340952  236328 kubeadm.go:319] 
	I1126 20:19:31.340991  236328 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:19:31.341072  236328 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:19:31.341146  236328 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:19:31.341155  236328 kubeadm.go:319] 
	I1126 20:19:31.341215  236328 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:19:31.341224  236328 kubeadm.go:319] 
	I1126 20:19:31.341274  236328 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:19:31.341288  236328 kubeadm.go:319] 
	I1126 20:19:31.341356  236328 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:19:31.341430  236328 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:19:31.341549  236328 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:19:31.341555  236328 kubeadm.go:319] 
	I1126 20:19:31.341624  236328 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:19:31.341689  236328 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:19:31.341694  236328 kubeadm.go:319] 
	I1126 20:19:31.341801  236328 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ifhmlj.38xz6luczf2bna95 \
	I1126 20:19:31.341953  236328 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 20:19:31.341978  236328 kubeadm.go:319] 	--control-plane 
	I1126 20:19:31.341982  236328 kubeadm.go:319] 
	I1126 20:19:31.342110  236328 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:19:31.342121  236328 kubeadm.go:319] 
	I1126 20:19:31.342211  236328 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ifhmlj.38xz6luczf2bna95 \
	I1126 20:19:31.342368  236328 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
	I1126 20:19:31.342385  236328 cni.go:84] Creating CNI manager for ""
	I1126 20:19:31.342396  236328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:19:31.344424  236328 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 20:19:30.181082  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:30.181486  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:30.181532  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:30.181583  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:30.214569  216504 cri.go:89] found id: "c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:30.214587  216504 cri.go:89] found id: ""
	I1126 20:19:30.214594  216504 logs.go:282] 1 containers: [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8]
	I1126 20:19:30.214635  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:30.218156  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:30.218211  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:30.249901  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:30.249921  216504 cri.go:89] found id: ""
	I1126 20:19:30.249930  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:30.249975  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:30.253652  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:30.253716  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:30.286582  216504 cri.go:89] found id: ""
	I1126 20:19:30.286605  216504 logs.go:282] 0 containers: []
	W1126 20:19:30.286615  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:30.286623  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:30.286680  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:30.319581  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:30.319604  216504 cri.go:89] found id: ""
	I1126 20:19:30.319614  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:30.319668  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:30.323159  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:30.323218  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:30.356273  216504 cri.go:89] found id: ""
	I1126 20:19:30.356301  216504 logs.go:282] 0 containers: []
	W1126 20:19:30.356310  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:30.356317  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:30.356374  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:30.388522  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:30.388540  216504 cri.go:89] found id: ""
	I1126 20:19:30.388547  216504 logs.go:282] 1 containers: [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:30.388600  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:30.392176  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:30.392237  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:30.425113  216504 cri.go:89] found id: ""
	I1126 20:19:30.425135  216504 logs.go:282] 0 containers: []
	W1126 20:19:30.425142  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:30.425147  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:30.425188  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:30.462531  216504 cri.go:89] found id: ""
	I1126 20:19:30.462553  216504 logs.go:282] 0 containers: []
	W1126 20:19:30.462564  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:30.462585  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:30.462603  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:30.533568  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:30.533588  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:30.533604  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:30.570136  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:30.570162  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:30.638724  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:30.638749  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:30.674789  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:30.674820  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:30.691233  216504 logs.go:123] Gathering logs for kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8] ...
	I1126 20:19:30.691257  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:30.734749  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:30.734781  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:30.781695  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:30.781724  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:30.822262  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:30.822293  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:33.451167  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:31.345438  236328 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:19:31.349472  236328 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1126 20:19:31.349489  236328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:19:31.362105  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:19:31.995293  236328 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:19:31.995350  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:31.995379  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-157431 minikube.k8s.io/updated_at=2025_11_26T20_19_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=old-k8s-version-157431 minikube.k8s.io/primary=true
	I1126 20:19:32.005053  236328 ops.go:34] apiserver oom_adj: -16
	I1126 20:19:32.060988  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:32.561693  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:33.061842  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:33.561647  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:34.061586  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:34.561834  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:35.061822  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:30.741884  211567 cri.go:89] found id: ""
	I1126 20:19:30.741912  211567 logs.go:282] 0 containers: []
	W1126 20:19:30.741922  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:30.741933  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:30.741947  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:30.771294  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:30.771323  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:30.825860  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:30.825885  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:30.861445  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:30.861482  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:30.949044  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:30.949076  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:30.964529  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:30.964555  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1126 20:19:38.451806  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1126 20:19:38.451857  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:38.451906  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:35.561633  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:36.061020  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:36.561165  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:37.061070  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:37.561993  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:38.061217  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:38.561662  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:39.061081  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:39.561033  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:40.061062  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:38.486670  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:19:38.486695  216504 cri.go:89] found id: "c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:38.486702  216504 cri.go:89] found id: ""
	I1126 20:19:38.486711  216504 logs.go:282] 2 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8]
	I1126 20:19:38.486765  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:38.490415  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:38.494017  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:38.494085  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:38.526831  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:38.526847  216504 cri.go:89] found id: ""
	I1126 20:19:38.526856  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:38.526903  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:38.530179  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:38.530229  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:38.562270  216504 cri.go:89] found id: ""
	I1126 20:19:38.562290  216504 logs.go:282] 0 containers: []
	W1126 20:19:38.562299  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:38.562306  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:38.562352  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:38.596782  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:38.596805  216504 cri.go:89] found id: ""
	I1126 20:19:38.596814  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:38.596862  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:38.600317  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:38.600380  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:38.635732  216504 cri.go:89] found id: ""
	I1126 20:19:38.635759  216504 logs.go:282] 0 containers: []
	W1126 20:19:38.635766  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:38.635772  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:38.635819  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:38.669217  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:38.669242  216504 cri.go:89] found id: ""
	I1126 20:19:38.669251  216504 logs.go:282] 1 containers: [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:38.669308  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:38.672835  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:38.672894  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:38.707384  216504 cri.go:89] found id: ""
	I1126 20:19:38.707404  216504 logs.go:282] 0 containers: []
	W1126 20:19:38.707410  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:38.707415  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:38.707473  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:38.742087  216504 cri.go:89] found id: ""
	I1126 20:19:38.742110  216504 logs.go:282] 0 containers: []
	W1126 20:19:38.742117  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:38.742130  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:19:38.742143  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:19:38.776288  216504 logs.go:123] Gathering logs for kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8] ...
	I1126 20:19:38.776312  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:38.812534  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:38.812557  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:38.844232  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:38.844259  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:38.877867  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:38.877898  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:38.917070  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:38.917098  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:38.996724  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:38.996748  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:39.011803  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:39.011829  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1126 20:19:40.561798  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:41.061584  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:41.561409  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:42.061646  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:42.562089  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:43.061328  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:43.561452  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:44.061036  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:44.561378  236328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:19:44.626253  236328 kubeadm.go:1114] duration metric: took 12.630949111s to wait for elevateKubeSystemPrivileges
	I1126 20:19:44.626290  236328 kubeadm.go:403] duration metric: took 21.631846766s to StartCluster
	I1126 20:19:44.626311  236328 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:44.626385  236328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:19:44.627885  236328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:19:44.628133  236328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:19:44.628171  236328 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:19:44.628281  236328 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:19:44.628376  236328 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-157431"
	I1126 20:19:44.628393  236328 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-157431"
	I1126 20:19:44.628392  236328 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-157431"
	I1126 20:19:44.628402  236328 config.go:182] Loaded profile config "old-k8s-version-157431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:19:44.628411  236328 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-157431"
	I1126 20:19:44.628426  236328 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:19:44.628788  236328 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:19:44.628919  236328 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:19:44.632602  236328 out.go:179] * Verifying Kubernetes components...
	I1126 20:19:44.633886  236328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:19:44.653968  236328 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-157431"
	I1126 20:19:44.654014  236328 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:19:44.654382  236328 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:19:44.655055  236328 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:19:44.657182  236328 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:19:44.657203  236328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:19:44.657253  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:44.681891  236328 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:19:44.681923  236328 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:19:44.681989  236328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:19:44.684967  236328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:19:44.707346  236328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:19:44.727985  236328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:19:44.794090  236328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:19:44.805228  236328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:19:44.817683  236328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:19:44.951997  236328 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1126 20:19:44.953214  236328 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-157431" to be "Ready" ...
	I1126 20:19:45.144305  236328 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 20:19:45.145272  236328 addons.go:530] duration metric: took 516.998319ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:19:41.030987  211567 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.066415083s)
	W1126 20:19:41.031017  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1126 20:19:41.031024  211567 logs.go:123] Gathering logs for kube-apiserver [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7] ...
	I1126 20:19:41.031034  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:41.060666  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:41.060691  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:43.608917  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:45.456673  236328 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-157431" context rescaled to 1 replicas
	W1126 20:19:46.956759  236328 node_ready.go:57] node "old-k8s-version-157431" has "Ready":"False" status (will retry)
	W1126 20:19:49.456118  236328 node_ready.go:57] node "old-k8s-version-157431" has "Ready":"False" status (will retry)
	I1126 20:19:48.611494  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1126 20:19:48.611547  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:48.611594  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:48.637315  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:19:48.637335  211567 cri.go:89] found id: "6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:48.637339  211567 cri.go:89] found id: ""
	I1126 20:19:48.637347  211567 logs.go:282] 2 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7]
	I1126 20:19:48.637388  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:48.641375  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:48.644915  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:48.644974  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:48.669823  211567 cri.go:89] found id: ""
	I1126 20:19:48.669843  211567 logs.go:282] 0 containers: []
	W1126 20:19:48.669853  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:48.669861  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:48.669904  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:48.694688  211567 cri.go:89] found id: ""
	I1126 20:19:48.694711  211567 logs.go:282] 0 containers: []
	W1126 20:19:48.694720  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:48.694727  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:48.694771  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:48.720523  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:48.720544  211567 cri.go:89] found id: ""
	I1126 20:19:48.720552  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:48.720604  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:48.724119  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:48.724178  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:48.748548  211567 cri.go:89] found id: ""
	I1126 20:19:48.748569  211567 logs.go:282] 0 containers: []
	W1126 20:19:48.748576  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:48.748583  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:48.748625  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:48.773002  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:48.773020  211567 cri.go:89] found id: ""
	I1126 20:19:48.773029  211567 logs.go:282] 1 containers: [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd]
	I1126 20:19:48.773074  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:48.776598  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:48.776650  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:48.800997  211567 cri.go:89] found id: ""
	I1126 20:19:48.801017  211567 logs.go:282] 0 containers: []
	W1126 20:19:48.801023  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:48.801028  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:48.801091  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:48.825434  211567 cri.go:89] found id: ""
	I1126 20:19:48.825469  211567 logs.go:282] 0 containers: []
	W1126 20:19:48.825479  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:48.825496  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:48.825508  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:48.853871  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:48.853898  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:48.928673  211567 logs.go:123] Gathering logs for kube-apiserver [6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7] ...
	I1126 20:19:48.928699  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cf1c8fac8af6cb3b54d449362e57e6c614df1ac99f2ba2252204f5ea4fdffd7"
	I1126 20:19:48.959954  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:48.959980  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:48.985619  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:48.985640  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:49.030130  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:49.030154  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:49.043605  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:49.043631  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1126 20:19:49.070101  216504 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.058251327s)
	W1126 20:19:49.070147  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1126 20:19:49.070155  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:49.070174  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:49.133323  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:49.133350  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:51.670585  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:51.670953  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:51.670998  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:51.671043  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:51.705522  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:19:51.705540  216504 cri.go:89] found id: "c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	I1126 20:19:51.705545  216504 cri.go:89] found id: ""
	I1126 20:19:51.705553  216504 logs.go:282] 2 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8]
	I1126 20:19:51.705606  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:51.709515  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:51.712976  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:51.713023  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:51.745235  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:51.745256  216504 cri.go:89] found id: ""
	I1126 20:19:51.745266  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:51.745320  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:51.748654  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:51.748721  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:51.780263  216504 cri.go:89] found id: ""
	I1126 20:19:51.780286  216504 logs.go:282] 0 containers: []
	W1126 20:19:51.780294  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:51.780300  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:51.780351  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:51.812903  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:51.812923  216504 cri.go:89] found id: ""
	I1126 20:19:51.812932  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:51.812985  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:51.816369  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:51.816419  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:51.849278  216504 cri.go:89] found id: ""
	I1126 20:19:51.849296  216504 logs.go:282] 0 containers: []
	W1126 20:19:51.849303  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:51.849311  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:51.849367  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:51.881399  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:19:51.881420  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:51.881424  216504 cri.go:89] found id: ""
	I1126 20:19:51.881430  216504 logs.go:282] 2 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:51.881498  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:51.884897  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:51.888660  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:51.888718  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:51.920963  216504 cri.go:89] found id: ""
	I1126 20:19:51.920980  216504 logs.go:282] 0 containers: []
	W1126 20:19:51.920989  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:51.920995  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:51.921050  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:51.953617  216504 cri.go:89] found id: ""
	I1126 20:19:51.953639  216504 logs.go:282] 0 containers: []
	W1126 20:19:51.953649  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:51.953659  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:51.953670  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:52.040696  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:52.040722  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:52.056135  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:52.056155  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:52.114315  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:52.114334  216504 logs.go:123] Gathering logs for kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8] ...
	I1126 20:19:52.114348  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	W1126 20:19:52.146151  216504 logs.go:130] failed kube-apiserver [c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:19:52.143994    3482 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8\": container with ID starting with c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8 not found: ID does not exist" containerID="c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	time="2025-11-26T20:19:52Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8\": container with ID starting with c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1126 20:19:52.143994    3482 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8\": container with ID starting with c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8 not found: ID does not exist" containerID="c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8"
	time="2025-11-26T20:19:52Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8\": container with ID starting with c117c40490d4ebb74e39a760ecb03bec1f10322fb9b123707b25ece8a272d4e8 not found: ID does not exist"
	
	** /stderr **
	I1126 20:19:52.146174  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:52.146188  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:52.179833  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:52.179872  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:52.214915  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:19:52.214938  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:19:52.249351  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:52.249375  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:52.316968  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:19:52.316997  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:19:52.348695  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:52.348719  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:52.393447  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:52.393479  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1126 20:19:51.456221  236328 node_ready.go:57] node "old-k8s-version-157431" has "Ready":"False" status (will retry)
	W1126 20:19:53.456358  236328 node_ready.go:57] node "old-k8s-version-157431" has "Ready":"False" status (will retry)
	I1126 20:19:52.589011  211567 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.54535705s)
	W1126 20:19:52.589045  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:46080->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:46080->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1126 20:19:52.589053  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:19:52.589066  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:19:52.620227  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:52.620252  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:55.166530  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:55.166903  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:19:55.166953  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:55.166997  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:55.195204  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:19:55.195223  211567 cri.go:89] found id: ""
	I1126 20:19:55.195231  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:19:55.195282  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:55.198949  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:55.199011  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:55.225727  211567 cri.go:89] found id: ""
	I1126 20:19:55.225750  211567 logs.go:282] 0 containers: []
	W1126 20:19:55.225759  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:55.225766  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:55.225815  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:55.253509  211567 cri.go:89] found id: ""
	I1126 20:19:55.253531  211567 logs.go:282] 0 containers: []
	W1126 20:19:55.253541  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:55.253549  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:55.253597  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:55.281881  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:55.281903  211567 cri.go:89] found id: ""
	I1126 20:19:55.281911  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:55.281976  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:55.285752  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:55.285808  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:55.311234  211567 cri.go:89] found id: ""
	I1126 20:19:55.311257  211567 logs.go:282] 0 containers: []
	W1126 20:19:55.311266  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:55.311273  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:55.311326  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:55.337637  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:19:55.337661  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:55.337668  211567 cri.go:89] found id: ""
	I1126 20:19:55.337677  211567 logs.go:282] 2 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd]
	I1126 20:19:55.337728  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:55.342706  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:55.346706  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:55.346763  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:55.374629  211567 cri.go:89] found id: ""
	I1126 20:19:55.374647  211567 logs.go:282] 0 containers: []
	W1126 20:19:55.374669  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:55.374680  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:55.374730  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:55.400764  211567 cri.go:89] found id: ""
	I1126 20:19:55.400787  211567 logs.go:282] 0 containers: []
	W1126 20:19:55.400797  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:55.400814  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:55.400826  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:55.414307  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:55.414326  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:55.471672  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:55.471692  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:19:55.471707  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:19:55.504335  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:55.504361  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:55.532231  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:55.532254  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:55.563004  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:55.563025  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:55.645482  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:55.645506  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:55.698612  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:19:55.698641  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:19:54.931511  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:54.931880  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:54.931928  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:54.931981  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:54.966822  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:19:54.966862  216504 cri.go:89] found id: ""
	I1126 20:19:54.966873  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:19:54.966921  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:54.970408  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:54.970498  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:55.002775  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:55.002792  216504 cri.go:89] found id: ""
	I1126 20:19:55.002803  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:55.002854  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:55.006281  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:55.006327  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:55.039049  216504 cri.go:89] found id: ""
	I1126 20:19:55.039069  216504 logs.go:282] 0 containers: []
	W1126 20:19:55.039075  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:55.039080  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:55.039146  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:55.071966  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:55.071982  216504 cri.go:89] found id: ""
	I1126 20:19:55.071989  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:55.072027  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:55.075515  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:55.075568  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:55.107341  216504 cri.go:89] found id: ""
	I1126 20:19:55.107362  216504 logs.go:282] 0 containers: []
	W1126 20:19:55.107368  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:55.107374  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:55.107417  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:55.138757  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:19:55.138775  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:55.138779  216504 cri.go:89] found id: ""
	I1126 20:19:55.138787  216504 logs.go:282] 2 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:55.138824  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:55.142208  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:55.145285  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:55.145336  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:55.178533  216504 cri.go:89] found id: ""
	I1126 20:19:55.178553  216504 logs.go:282] 0 containers: []
	W1126 20:19:55.178563  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:55.178570  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:55.178620  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:55.214062  216504 cri.go:89] found id: ""
	I1126 20:19:55.214089  216504 logs.go:282] 0 containers: []
	W1126 20:19:55.214099  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:55.214119  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:55.214134  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:55.307663  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:55.307689  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:55.386990  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:55.387022  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:55.434109  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:55.434145  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:55.450045  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:55.450069  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:55.515781  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:55.515804  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:19:55.515821  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:19:55.557574  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:55.557607  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:55.594084  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:19:55.594116  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:19:55.626435  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:55.626470  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:55.661234  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:55.661268  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:58.203513  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:19:58.203898  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:19:58.203961  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:58.204020  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:58.237544  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:19:58.237563  216504 cri.go:89] found id: ""
	I1126 20:19:58.237572  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:19:58.237626  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:58.241210  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:58.241270  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:58.276373  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:58.276395  216504 cri.go:89] found id: ""
	I1126 20:19:58.276414  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:19:58.276485  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:58.280484  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:58.280534  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:58.327088  216504 cri.go:89] found id: ""
	I1126 20:19:58.327109  216504 logs.go:282] 0 containers: []
	W1126 20:19:58.327118  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:19:58.327127  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:58.327178  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:58.368814  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:58.368835  216504 cri.go:89] found id: ""
	I1126 20:19:58.368850  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:19:58.368962  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:58.372691  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:58.372748  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:58.410047  216504 cri.go:89] found id: ""
	I1126 20:19:58.410070  216504 logs.go:282] 0 containers: []
	W1126 20:19:58.410079  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:58.410086  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:58.410143  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:58.446016  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:19:58.446039  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:58.446044  216504 cri.go:89] found id: ""
	I1126 20:19:58.446053  216504 logs.go:282] 2 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:19:58.446102  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:58.449991  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:19:58.453694  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:58.453749  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1126 20:19:55.956309  236328 node_ready.go:57] node "old-k8s-version-157431" has "Ready":"False" status (will retry)
	I1126 20:19:57.455731  236328 node_ready.go:49] node "old-k8s-version-157431" is "Ready"
	I1126 20:19:57.455755  236328 node_ready.go:38] duration metric: took 12.502505595s for node "old-k8s-version-157431" to be "Ready" ...
	I1126 20:19:57.455769  236328 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:19:57.455811  236328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:19:57.467713  236328 api_server.go:72] duration metric: took 12.839504691s to wait for apiserver process to appear ...
	I1126 20:19:57.467731  236328 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:19:57.467747  236328 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:19:57.471595  236328 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:19:57.472677  236328 api_server.go:141] control plane version: v1.28.0
	I1126 20:19:57.472700  236328 api_server.go:131] duration metric: took 4.963676ms to wait for apiserver health ...
	I1126 20:19:57.472708  236328 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:19:57.475902  236328 system_pods.go:59] 8 kube-system pods found
	I1126 20:19:57.475932  236328 system_pods.go:61] "coredns-5dd5756b68-jhrhx" [483a52cf-1d0a-4b51-b9b1-d986b07fa545] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:19:57.475939  236328 system_pods.go:61] "etcd-old-k8s-version-157431" [6b5ff917-ffe0-4bd0-bc19-2cbbcb7511c9] Running
	I1126 20:19:57.475947  236328 system_pods.go:61] "kindnet-zlg4b" [9e7b6449-704d-42a1-863d-ec678f485d78] Running
	I1126 20:19:57.475955  236328 system_pods.go:61] "kube-apiserver-old-k8s-version-157431" [4755237d-9665-4f4a-acdb-7aad9f3a685f] Running
	I1126 20:19:57.475964  236328 system_pods.go:61] "kube-controller-manager-old-k8s-version-157431" [d139f643-1c90-4ccb-9841-d5da46929720] Running
	I1126 20:19:57.475971  236328 system_pods.go:61] "kube-proxy-qqdfx" [896fd93b-917a-42b9-92db-283923830743] Running
	I1126 20:19:57.475980  236328 system_pods.go:61] "kube-scheduler-old-k8s-version-157431" [54a1cc6d-cdf5-4041-aa7e-7edda8f78380] Running
	I1126 20:19:57.475988  236328 system_pods.go:61] "storage-provisioner" [f6d6f6e0-74c6-4708-abff-c18f6962424e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:19:57.476001  236328 system_pods.go:74] duration metric: took 3.28435ms to wait for pod list to return data ...
	I1126 20:19:57.476014  236328 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:19:57.480975  236328 default_sa.go:45] found service account: "default"
	I1126 20:19:57.480992  236328 default_sa.go:55] duration metric: took 4.971365ms for default service account to be created ...
	I1126 20:19:57.481002  236328 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:19:57.484484  236328 system_pods.go:86] 8 kube-system pods found
	I1126 20:19:57.484513  236328 system_pods.go:89] "coredns-5dd5756b68-jhrhx" [483a52cf-1d0a-4b51-b9b1-d986b07fa545] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:19:57.484521  236328 system_pods.go:89] "etcd-old-k8s-version-157431" [6b5ff917-ffe0-4bd0-bc19-2cbbcb7511c9] Running
	I1126 20:19:57.484529  236328 system_pods.go:89] "kindnet-zlg4b" [9e7b6449-704d-42a1-863d-ec678f485d78] Running
	I1126 20:19:57.484535  236328 system_pods.go:89] "kube-apiserver-old-k8s-version-157431" [4755237d-9665-4f4a-acdb-7aad9f3a685f] Running
	I1126 20:19:57.484540  236328 system_pods.go:89] "kube-controller-manager-old-k8s-version-157431" [d139f643-1c90-4ccb-9841-d5da46929720] Running
	I1126 20:19:57.484545  236328 system_pods.go:89] "kube-proxy-qqdfx" [896fd93b-917a-42b9-92db-283923830743] Running
	I1126 20:19:57.484548  236328 system_pods.go:89] "kube-scheduler-old-k8s-version-157431" [54a1cc6d-cdf5-4041-aa7e-7edda8f78380] Running
	I1126 20:19:57.484555  236328 system_pods.go:89] "storage-provisioner" [f6d6f6e0-74c6-4708-abff-c18f6962424e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:19:57.484578  236328 retry.go:31] will retry after 255.092448ms: missing components: kube-dns
	I1126 20:19:57.743301  236328 system_pods.go:86] 8 kube-system pods found
	I1126 20:19:57.743331  236328 system_pods.go:89] "coredns-5dd5756b68-jhrhx" [483a52cf-1d0a-4b51-b9b1-d986b07fa545] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:19:57.743336  236328 system_pods.go:89] "etcd-old-k8s-version-157431" [6b5ff917-ffe0-4bd0-bc19-2cbbcb7511c9] Running
	I1126 20:19:57.743349  236328 system_pods.go:89] "kindnet-zlg4b" [9e7b6449-704d-42a1-863d-ec678f485d78] Running
	I1126 20:19:57.743353  236328 system_pods.go:89] "kube-apiserver-old-k8s-version-157431" [4755237d-9665-4f4a-acdb-7aad9f3a685f] Running
	I1126 20:19:57.743356  236328 system_pods.go:89] "kube-controller-manager-old-k8s-version-157431" [d139f643-1c90-4ccb-9841-d5da46929720] Running
	I1126 20:19:57.743360  236328 system_pods.go:89] "kube-proxy-qqdfx" [896fd93b-917a-42b9-92db-283923830743] Running
	I1126 20:19:57.743363  236328 system_pods.go:89] "kube-scheduler-old-k8s-version-157431" [54a1cc6d-cdf5-4041-aa7e-7edda8f78380] Running
	I1126 20:19:57.743368  236328 system_pods.go:89] "storage-provisioner" [f6d6f6e0-74c6-4708-abff-c18f6962424e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:19:57.743380  236328 retry.go:31] will retry after 243.364315ms: missing components: kube-dns
	I1126 20:19:57.991012  236328 system_pods.go:86] 8 kube-system pods found
	I1126 20:19:57.991042  236328 system_pods.go:89] "coredns-5dd5756b68-jhrhx" [483a52cf-1d0a-4b51-b9b1-d986b07fa545] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:19:57.991047  236328 system_pods.go:89] "etcd-old-k8s-version-157431" [6b5ff917-ffe0-4bd0-bc19-2cbbcb7511c9] Running
	I1126 20:19:57.991054  236328 system_pods.go:89] "kindnet-zlg4b" [9e7b6449-704d-42a1-863d-ec678f485d78] Running
	I1126 20:19:57.991059  236328 system_pods.go:89] "kube-apiserver-old-k8s-version-157431" [4755237d-9665-4f4a-acdb-7aad9f3a685f] Running
	I1126 20:19:57.991065  236328 system_pods.go:89] "kube-controller-manager-old-k8s-version-157431" [d139f643-1c90-4ccb-9841-d5da46929720] Running
	I1126 20:19:57.991069  236328 system_pods.go:89] "kube-proxy-qqdfx" [896fd93b-917a-42b9-92db-283923830743] Running
	I1126 20:19:57.991074  236328 system_pods.go:89] "kube-scheduler-old-k8s-version-157431" [54a1cc6d-cdf5-4041-aa7e-7edda8f78380] Running
	I1126 20:19:57.991081  236328 system_pods.go:89] "storage-provisioner" [f6d6f6e0-74c6-4708-abff-c18f6962424e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:19:57.991100  236328 retry.go:31] will retry after 313.17391ms: missing components: kube-dns
	I1126 20:19:58.310207  236328 system_pods.go:86] 8 kube-system pods found
	I1126 20:19:58.310243  236328 system_pods.go:89] "coredns-5dd5756b68-jhrhx" [483a52cf-1d0a-4b51-b9b1-d986b07fa545] Running
	I1126 20:19:58.310251  236328 system_pods.go:89] "etcd-old-k8s-version-157431" [6b5ff917-ffe0-4bd0-bc19-2cbbcb7511c9] Running
	I1126 20:19:58.310257  236328 system_pods.go:89] "kindnet-zlg4b" [9e7b6449-704d-42a1-863d-ec678f485d78] Running
	I1126 20:19:58.310262  236328 system_pods.go:89] "kube-apiserver-old-k8s-version-157431" [4755237d-9665-4f4a-acdb-7aad9f3a685f] Running
	I1126 20:19:58.310268  236328 system_pods.go:89] "kube-controller-manager-old-k8s-version-157431" [d139f643-1c90-4ccb-9841-d5da46929720] Running
	I1126 20:19:58.310273  236328 system_pods.go:89] "kube-proxy-qqdfx" [896fd93b-917a-42b9-92db-283923830743] Running
	I1126 20:19:58.310278  236328 system_pods.go:89] "kube-scheduler-old-k8s-version-157431" [54a1cc6d-cdf5-4041-aa7e-7edda8f78380] Running
	I1126 20:19:58.310288  236328 system_pods.go:89] "storage-provisioner" [f6d6f6e0-74c6-4708-abff-c18f6962424e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:19:58.310318  236328 system_pods.go:126] duration metric: took 829.295613ms to wait for k8s-apps to be running ...
	I1126 20:19:58.310333  236328 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:19:58.310385  236328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:19:58.326860  236328 system_svc.go:56] duration metric: took 16.521313ms WaitForService to wait for kubelet
	I1126 20:19:58.326889  236328 kubeadm.go:587] duration metric: took 13.698682053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:19:58.326908  236328 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:19:58.329551  236328 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:19:58.329583  236328 node_conditions.go:123] node cpu capacity is 8
	I1126 20:19:58.329604  236328 node_conditions.go:105] duration metric: took 2.690184ms to run NodePressure ...
	I1126 20:19:58.329624  236328 start.go:242] waiting for startup goroutines ...
	I1126 20:19:58.329637  236328 start.go:247] waiting for cluster config update ...
	I1126 20:19:58.329658  236328 start.go:256] writing updated cluster config ...
	I1126 20:19:58.330045  236328 ssh_runner.go:195] Run: rm -f paused
	I1126 20:19:58.333867  236328 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:19:58.338367  236328 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-jhrhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:58.343092  236328 pod_ready.go:94] pod "coredns-5dd5756b68-jhrhx" is "Ready"
	I1126 20:19:58.343113  236328 pod_ready.go:86] duration metric: took 4.723017ms for pod "coredns-5dd5756b68-jhrhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:58.346337  236328 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:58.350680  236328 pod_ready.go:94] pod "etcd-old-k8s-version-157431" is "Ready"
	I1126 20:19:58.350696  236328 pod_ready.go:86] duration metric: took 4.339312ms for pod "etcd-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:58.355704  236328 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:58.359665  236328 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-157431" is "Ready"
	I1126 20:19:58.359689  236328 pod_ready.go:86] duration metric: took 3.964754ms for pod "kube-apiserver-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:58.362281  236328 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:58.738786  236328 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-157431" is "Ready"
	I1126 20:19:58.738815  236328 pod_ready.go:86] duration metric: took 376.518325ms for pod "kube-controller-manager-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:58.938102  236328 pod_ready.go:83] waiting for pod "kube-proxy-qqdfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:59.337836  236328 pod_ready.go:94] pod "kube-proxy-qqdfx" is "Ready"
	I1126 20:19:59.337858  236328 pod_ready.go:86] duration metric: took 399.730072ms for pod "kube-proxy-qqdfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:59.538129  236328 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:59.937550  236328 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-157431" is "Ready"
	I1126 20:19:59.937574  236328 pod_ready.go:86] duration metric: took 399.423931ms for pod "kube-scheduler-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:19:59.937586  236328 pod_ready.go:40] duration metric: took 1.603692885s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:19:59.981699  236328 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1126 20:19:59.983136  236328 out.go:203] 
	W1126 20:19:59.984398  236328 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1126 20:19:59.985480  236328 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1126 20:19:59.986803  236328 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-157431" cluster and "default" namespace by default
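The `pod_ready` waits logged above (poll each kube-system pod until it reports "Ready" or a 4m0s deadline passes) boil down to a poll-until-deadline loop. A minimal shell sketch of that pattern — the `wait_ready` helper, its timings, and the demo condition are illustrative assumptions, not minikube's actual Go implementation:

```shell
# wait_ready SECONDS CMD...: re-run CMD until it succeeds or the deadline
# passes. Sketch of the polling pattern only; minikube does this in Go
# against the Kubernetes API, not via shell.
wait_ready() {
  local deadline=$(( $(date +%s) + $1 ))
  shift
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1  # deadline exceeded, pod never became Ready
    fi
    sleep 1
  done
}

# Dry demo: a condition that is immediately true reports ready at once.
wait_ready 5 true && echo "ready"
```

Against a real cluster the condition would be something like `kubectl -n kube-system get pod <name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True` (hypothetical usage, pod name elided).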
	I1126 20:19:55.724425  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:55.724447  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:58.273524  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:19:58.273901  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:19:58.273955  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:19:58.274008  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:19:58.310887  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:19:58.310911  211567 cri.go:89] found id: ""
	I1126 20:19:58.310935  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:19:58.310992  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:58.316179  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:19:58.316268  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:19:58.348995  211567 cri.go:89] found id: ""
	I1126 20:19:58.349020  211567 logs.go:282] 0 containers: []
	W1126 20:19:58.349029  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:19:58.349036  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:19:58.349085  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:19:58.377542  211567 cri.go:89] found id: ""
	I1126 20:19:58.377566  211567 logs.go:282] 0 containers: []
	W1126 20:19:58.377576  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:19:58.377584  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:19:58.377627  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:19:58.404579  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:19:58.404594  211567 cri.go:89] found id: ""
	I1126 20:19:58.404602  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:19:58.404643  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:58.408941  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:19:58.409001  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:19:58.435395  211567 cri.go:89] found id: ""
	I1126 20:19:58.435416  211567 logs.go:282] 0 containers: []
	W1126 20:19:58.435425  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:19:58.435432  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:19:58.435497  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:19:58.463122  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:19:58.463145  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:58.463151  211567 cri.go:89] found id: ""
	I1126 20:19:58.463159  211567 logs.go:282] 2 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd]
	I1126 20:19:58.463203  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:58.467075  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:19:58.470639  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:19:58.470695  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:19:58.498072  211567 cri.go:89] found id: ""
	I1126 20:19:58.498089  211567 logs.go:282] 0 containers: []
	W1126 20:19:58.498096  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:58.498102  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:58.498148  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:58.525421  211567 cri.go:89] found id: ""
	I1126 20:19:58.525443  211567 logs.go:282] 0 containers: []
	W1126 20:19:58.525449  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:58.525479  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:19:58.525498  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:19:58.553435  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:58.553482  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:58.601525  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:58.601547  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:58.690078  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:19:58.690109  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:19:58.716719  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:19:58.716743  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:19:58.747763  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:58.747793  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:58.761922  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:58.761946  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:58.818955  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:58.818988  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:19:58.819003  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:19:58.851541  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:19:58.851567  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
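The repeated `api_server.go:253/269` lines above are a health probe against `https://<node-ip>:8443/healthz` that keeps failing with "connection refused" while the apiserver is down. A rough equivalent with `curl` — an assumption for illustration; minikube uses Go's HTTP client, and the `-k` flag stands in for its configured TLS trust:

```shell
# check_healthz URL: probe URL/healthz and succeed only on the literal "ok"
# body that a healthy kube-apiserver returns. Connection refused, timeouts,
# and any other body all count as not-ready.
check_healthz() {
  curl -sk --max-time 2 "$1/healthz" | grep -q '^ok$'
}

# Dry demo against a closed local port, mirroring the refused dials above.
check_healthz https://127.0.0.1:1 || echo "apiserver not ready"
```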
	I1126 20:19:58.488818  216504 cri.go:89] found id: ""
	I1126 20:19:58.488840  216504 logs.go:282] 0 containers: []
	W1126 20:19:58.488848  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:19:58.488860  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:19:58.488921  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:19:58.526050  216504 cri.go:89] found id: ""
	I1126 20:19:58.526071  216504 logs.go:282] 0 containers: []
	W1126 20:19:58.526080  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:19:58.526093  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:19:58.526108  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:19:58.615526  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:19:58.615555  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:19:58.655203  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:19:58.655226  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:19:58.724860  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:19:58.724884  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:19:58.772065  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:19:58.772087  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:19:58.787795  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:19:58.787817  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:19:58.854320  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:19:58.854338  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:19:58.854351  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:19:58.889440  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:19:58.889481  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:19:58.922780  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:19:58.922805  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:19:58.956253  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:19:58.956279  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:01.494528  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:01.494927  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:01.494983  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:01.495033  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:01.534919  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:01.534941  216504 cri.go:89] found id: ""
	I1126 20:20:01.534951  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:01.535001  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:01.539151  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:01.539222  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:01.576427  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:01.576446  216504 cri.go:89] found id: ""
	I1126 20:20:01.576477  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:01.576528  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:01.580394  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:01.580438  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:01.617502  216504 cri.go:89] found id: ""
	I1126 20:20:01.617528  216504 logs.go:282] 0 containers: []
	W1126 20:20:01.617535  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:01.617541  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:01.617586  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:01.653900  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:01.653925  216504 cri.go:89] found id: ""
	I1126 20:20:01.653936  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:01.653990  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:01.657692  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:01.657760  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:01.693606  216504 cri.go:89] found id: ""
	I1126 20:20:01.693629  216504 logs.go:282] 0 containers: []
	W1126 20:20:01.693638  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:01.693646  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:01.693718  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:01.731803  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:01.731877  216504 cri.go:89] found id: "be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
	I1126 20:20:01.731890  216504 cri.go:89] found id: ""
	I1126 20:20:01.731900  216504 logs.go:282] 2 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00]
	I1126 20:20:01.732059  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:01.736076  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:01.739433  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:01.739520  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:01.774788  216504 cri.go:89] found id: ""
	I1126 20:20:01.774814  216504 logs.go:282] 0 containers: []
	W1126 20:20:01.774824  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:01.774834  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:01.774891  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:01.812525  216504 cri.go:89] found id: ""
	I1126 20:20:01.812547  216504 logs.go:282] 0 containers: []
	W1126 20:20:01.812554  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:01.812566  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:01.812589  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:01.857420  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:01.857445  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:01.872286  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:01.872306  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:01.910660  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:01.910683  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:01.946251  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:01.946281  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:01.982793  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:01.982816  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:02.076897  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:02.076924  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:02.134323  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:02.134340  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:02.134351  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:02.167016  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:02.167044  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:02.236616  216504 logs.go:123] Gathering logs for kube-controller-manager [be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00] ...
	I1126 20:20:02.236643  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4b02e604ec5b37aa81ef10164c4129b716578a9e391854144c19256393ef00"
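Each "Gathering logs for ..." cycle above fans out over the same fixed set of sources: `journalctl` for the kubelet and CRI-O units, plus `crictl logs --tail 400` per discovered container ID. A dry sketch of that fan-out (it only prints the commands, using the flags verbatim from the log; `CONTAINER_ID` is a placeholder):

```shell
# gather ID: emit the log-collection commands minikube runs for one
# container plus the two systemd units it always includes.
gather() {
  printf 'journalctl -u %s -n 400\n' kubelet crio
  printf 'crictl logs --tail 400 %s\n' "$1"
}

gather CONTAINER_ID
```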
	I1126 20:20:01.401508  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:01.401904  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:01.401958  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:01.402003  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:01.428547  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:01.428569  211567 cri.go:89] found id: ""
	I1126 20:20:01.428577  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:01.428622  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:01.432425  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:01.432493  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:01.457588  211567 cri.go:89] found id: ""
	I1126 20:20:01.457611  211567 logs.go:282] 0 containers: []
	W1126 20:20:01.457617  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:01.457623  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:01.457666  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:01.482778  211567 cri.go:89] found id: ""
	I1126 20:20:01.482804  211567 logs.go:282] 0 containers: []
	W1126 20:20:01.482814  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:01.482821  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:01.482872  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:01.510903  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:01.510925  211567 cri.go:89] found id: ""
	I1126 20:20:01.510935  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:01.510976  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:01.515027  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:01.515075  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:01.544080  211567 cri.go:89] found id: ""
	I1126 20:20:01.544099  211567 logs.go:282] 0 containers: []
	W1126 20:20:01.544107  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:01.544120  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:01.544157  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:01.570618  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:01.570644  211567 cri.go:89] found id: "51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:20:01.570650  211567 cri.go:89] found id: ""
	I1126 20:20:01.570659  211567 logs.go:282] 2 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd]
	I1126 20:20:01.570716  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:01.575182  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:01.579071  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:01.579117  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:01.605821  211567 cri.go:89] found id: ""
	I1126 20:20:01.605849  211567 logs.go:282] 0 containers: []
	W1126 20:20:01.605859  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:01.605868  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:01.605928  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:01.634335  211567 cri.go:89] found id: ""
	I1126 20:20:01.634358  211567 logs.go:282] 0 containers: []
	W1126 20:20:01.634368  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:01.634385  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:01.634398  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:01.692511  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:01.692538  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:01.692555  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:01.747242  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:01.747273  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:01.775418  211567 logs.go:123] Gathering logs for kube-controller-manager [51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd] ...
	I1126 20:20:01.775441  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51387f3f558f400fa925e734509a59c5b7fb4a439a0755f775b3dc32fe473dfd"
	I1126 20:20:01.803405  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:01.803445  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:01.887070  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:01.887095  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:01.902069  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:01.902096  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:01.936527  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:01.936550  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:01.986171  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:01.986201  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:04.519832  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:04.520264  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:04.520319  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:04.520373  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:04.545729  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:04.545748  211567 cri.go:89] found id: ""
	I1126 20:20:04.545755  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:04.545808  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:04.549447  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:04.549526  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:04.574950  211567 cri.go:89] found id: ""
	I1126 20:20:04.574968  211567 logs.go:282] 0 containers: []
	W1126 20:20:04.574975  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:04.574980  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:04.575017  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:04.599869  211567 cri.go:89] found id: ""
	I1126 20:20:04.599893  211567 logs.go:282] 0 containers: []
	W1126 20:20:04.599902  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:04.599909  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:04.599960  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:04.624994  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:04.625011  211567 cri.go:89] found id: ""
	I1126 20:20:04.625017  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:04.625060  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:04.628532  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:04.628589  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:04.652627  211567 cri.go:89] found id: ""
	I1126 20:20:04.652649  211567 logs.go:282] 0 containers: []
	W1126 20:20:04.652659  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:04.652665  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:04.652705  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:04.677200  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:04.677215  211567 cri.go:89] found id: ""
	I1126 20:20:04.677221  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:04.677259  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:04.680763  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:04.680832  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:04.705406  211567 cri.go:89] found id: ""
	I1126 20:20:04.705431  211567 logs.go:282] 0 containers: []
	W1126 20:20:04.705438  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:04.705443  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:04.705495  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:04.729098  211567 cri.go:89] found id: ""
	I1126 20:20:04.729115  211567 logs.go:282] 0 containers: []
	W1126 20:20:04.729121  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:04.729135  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:04.729144  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:04.742646  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:04.742667  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:04.797480  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:04.797506  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:04.797531  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:04.828815  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:04.828838  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:04.894067  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:04.894091  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:04.920222  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:04.920251  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:04.975250  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:04.975272  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:05.005245  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:05.005272  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:04.770525  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:04.770954  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:04.771018  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:04.771072  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:04.808333  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:04.808355  216504 cri.go:89] found id: ""
	I1126 20:20:04.808365  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:04.808407  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:04.812228  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:04.812278  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:04.847808  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:04.847830  216504 cri.go:89] found id: ""
	I1126 20:20:04.847840  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:04.847888  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:04.851623  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:04.851681  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:04.892972  216504 cri.go:89] found id: ""
	I1126 20:20:04.892996  216504 logs.go:282] 0 containers: []
	W1126 20:20:04.893005  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:04.893012  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:04.893067  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:04.927178  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:04.927196  216504 cri.go:89] found id: ""
	I1126 20:20:04.927203  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:04.927254  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:04.931170  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:04.931231  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:04.969441  216504 cri.go:89] found id: ""
	I1126 20:20:04.969475  216504 logs.go:282] 0 containers: []
	W1126 20:20:04.969484  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:04.969492  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:04.969550  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:05.004793  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:05.004819  216504 cri.go:89] found id: ""
	I1126 20:20:05.004829  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:05.004884  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:05.008769  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:05.008834  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:05.044843  216504 cri.go:89] found id: ""
	I1126 20:20:05.044869  216504 logs.go:282] 0 containers: []
	W1126 20:20:05.044878  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:05.044886  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:05.044938  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:05.076968  216504 cri.go:89] found id: ""
	I1126 20:20:05.076990  216504 logs.go:282] 0 containers: []
	W1126 20:20:05.076999  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:05.077016  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:05.077033  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:05.160094  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:05.160121  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:05.218621  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:05.218645  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:05.218659  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:05.254807  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:05.254832  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:05.288341  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:05.288368  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:05.321807  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:05.321831  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:05.366412  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:05.366436  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:05.381144  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:05.381170  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:05.450242  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:05.450268  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:07.988522  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:07.988998  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:07.989060  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:07.989117  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:08.025494  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:08.025515  216504 cri.go:89] found id: ""
	I1126 20:20:08.025523  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:08.025587  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:08.029402  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:08.029473  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:08.065082  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:08.065112  216504 cri.go:89] found id: ""
	I1126 20:20:08.065123  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:08.065178  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:08.069506  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:08.069565  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:08.105253  216504 cri.go:89] found id: ""
	I1126 20:20:08.105272  216504 logs.go:282] 0 containers: []
	W1126 20:20:08.105279  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:08.105284  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:08.105335  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:08.139767  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:08.139793  216504 cri.go:89] found id: ""
	I1126 20:20:08.139803  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:08.139874  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:08.143504  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:08.143557  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:08.177925  216504 cri.go:89] found id: ""
	I1126 20:20:08.177951  216504 logs.go:282] 0 containers: []
	W1126 20:20:08.177960  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:08.177969  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:08.178023  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:08.213279  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:08.213304  216504 cri.go:89] found id: ""
	I1126 20:20:08.213316  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:08.213375  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:08.217063  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:08.217125  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:08.254351  216504 cri.go:89] found id: ""
	I1126 20:20:08.254374  216504 logs.go:282] 0 containers: []
	W1126 20:20:08.254394  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:08.254401  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:08.254479  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:08.294002  216504 cri.go:89] found id: ""
	I1126 20:20:08.294027  216504 logs.go:282] 0 containers: []
	W1126 20:20:08.294037  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:08.294053  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:08.294070  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:08.332647  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:08.332678  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:08.380420  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:08.380447  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
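	The repeated pattern in the log above — for each control-plane component, run `sudo crictl ps -a --quiet --name=<component>`, then gather logs for whatever was found — can be sketched as a small shell helper. The component names and the crictl invocation are taken directly from the log lines; the helper function itself is illustrative and not part of minikube.

	```shell
	#!/bin/sh
	# Build the crictl listing command minikube runs for one component
	# (matches the "listing CRI containers" / ssh_runner lines above).
	cri_list_cmd() {
	  printf 'sudo crictl ps -a --quiet --name=%s\n' "$1"
	}

	# Components probed in the log, in the order they appear.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  cri_list_cmd "$c"
	done
	```

	An empty result for a component (e.g. `etcd` or `kube-proxy` in the first sequence) produces the `No container was found matching "..."` warnings seen above, after which minikube falls back to `journalctl -u kubelet` / `journalctl -u crio` for context.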
	
	
	==> CRI-O <==
	Nov 26 20:19:57 old-k8s-version-157431 crio[775]: time="2025-11-26T20:19:57.343627318Z" level=info msg="Starting container: 007b5433fe6ebdf1f0759b32929574e9badf1e18cb24e4fcea3e07949489ae68" id=059ccae3-9ca2-4380-8250-001efd1c8205 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:19:57 old-k8s-version-157431 crio[775]: time="2025-11-26T20:19:57.345769212Z" level=info msg="Started container" PID=2176 containerID=007b5433fe6ebdf1f0759b32929574e9badf1e18cb24e4fcea3e07949489ae68 description=kube-system/coredns-5dd5756b68-jhrhx/coredns id=059ccae3-9ca2-4380-8250-001efd1c8205 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a61af3094b1df5b915bb7a2c53d962c3985dcfc16612ff4b906691b6fb9f3d39
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.469869545Z" level=info msg="Running pod sandbox: default/busybox/POD" id=83fbb453-487f-4a35-8e19-f97465fa5216 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.469933082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.474872089Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b6691915b2a79608de68ed0bbef9512f79508aaf616fa956cf8ca5f76f2d53e6 UID:d6c41f35-cc7b-423c-b8e2-76531e7a8b3b NetNS:/var/run/netns/e76e2627-bf85-462f-bc70-4ff727e28b59 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000622560}] Aliases:map[]}"
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.474907289Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.483764564Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b6691915b2a79608de68ed0bbef9512f79508aaf616fa956cf8ca5f76f2d53e6 UID:d6c41f35-cc7b-423c-b8e2-76531e7a8b3b NetNS:/var/run/netns/e76e2627-bf85-462f-bc70-4ff727e28b59 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000622560}] Aliases:map[]}"
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.483925088Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.484678968Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.485538885Z" level=info msg="Ran pod sandbox b6691915b2a79608de68ed0bbef9512f79508aaf616fa956cf8ca5f76f2d53e6 with infra container: default/busybox/POD" id=83fbb453-487f-4a35-8e19-f97465fa5216 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.486628435Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b3dbd999-d0d3-4814-9520-d6501fd1821b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.486753682Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b3dbd999-d0d3-4814-9520-d6501fd1821b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.486800849Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b3dbd999-d0d3-4814-9520-d6501fd1821b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.487270851Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0e3c66ec-2d45-4a38-8029-8e740834e80d name=/runtime.v1.ImageService/PullImage
	Nov 26 20:20:00 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:00.490543845Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 26 20:20:01 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:01.225134646Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0e3c66ec-2d45-4a38-8029-8e740834e80d name=/runtime.v1.ImageService/PullImage
	Nov 26 20:20:01 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:01.225835824Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=92858490-9068-43c3-a478-abe1303f40b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:01 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:01.228845614Z" level=info msg="Creating container: default/busybox/busybox" id=cb8e0d2f-3400-4589-b307-cfe34d32084a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:01 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:01.228969658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:01 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:01.233402096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:01 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:01.233876766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:01 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:01.26550866Z" level=info msg="Created container f251d6df7c745aeb9829fe8b6915032e4a51cb4b738d962293aee3ecf593ea93: default/busybox/busybox" id=cb8e0d2f-3400-4589-b307-cfe34d32084a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:01 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:01.265946109Z" level=info msg="Starting container: f251d6df7c745aeb9829fe8b6915032e4a51cb4b738d962293aee3ecf593ea93" id=fe08d53a-190c-472d-be19-fade43c5f7b5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:20:01 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:01.267684287Z" level=info msg="Started container" PID=2255 containerID=f251d6df7c745aeb9829fe8b6915032e4a51cb4b738d962293aee3ecf593ea93 description=default/busybox/busybox id=fe08d53a-190c-472d-be19-fade43c5f7b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6691915b2a79608de68ed0bbef9512f79508aaf616fa956cf8ca5f76f2d53e6
	Nov 26 20:20:08 old-k8s-version-157431 crio[775]: time="2025-11-26T20:20:08.255026584Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	f251d6df7c745       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   b6691915b2a79       busybox                                          default
	007b5433fe6eb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   a61af3094b1df       coredns-5dd5756b68-jhrhx                         kube-system
	17c6e96e8ac8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   2491682ed5800       storage-provisioner                              kube-system
	e789126519a5f       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   cd20ca746906e       kindnet-zlg4b                                    kube-system
	33b9c62a3d8bf       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   c49c93c274e00       kube-proxy-qqdfx                                 kube-system
	83905e74dc682       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   d59cce8c94d60       kube-controller-manager-old-k8s-version-157431   kube-system
	b41ebfdd9122d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   9cfc33f80862d       etcd-old-k8s-version-157431                      kube-system
	140b58a86cdf2       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   58e20dc4e9565       kube-apiserver-old-k8s-version-157431            kube-system
	4e5fb4baf77c5       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   a78eadf2833e6       kube-scheduler-old-k8s-version-157431            kube-system
	
	
	==> coredns [007b5433fe6ebdf1f0759b32929574e9badf1e18cb24e4fcea3e07949489ae68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58164 - 21897 "HINFO IN 9110434122362843477.1407792245248746338. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064906947s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-157431
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-157431
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=old-k8s-version-157431
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_19_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:19:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-157431
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:20:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:20:01 +0000   Wed, 26 Nov 2025 20:19:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:20:01 +0000   Wed, 26 Nov 2025 20:19:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:20:01 +0000   Wed, 26 Nov 2025 20:19:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:20:01 +0000   Wed, 26 Nov 2025 20:19:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-157431
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                55f945af-c138-4761-b59d-13bed6931065
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-jhrhx                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-157431                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-zlg4b                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-157431             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-157431    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-qqdfx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-157431             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 38s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s   kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s   kubelet          Node old-k8s-version-157431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s   kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-157431 event: Registered Node old-k8s-version-157431 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-157431 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
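The repeated "martian source" entries above record packets arriving on eth0 with a source address (127.0.0.1) that should never appear on that interface — typical hairpin/NAT noise inside the minikube Docker network rather than a cluster fault by itself. When triaging a long dmesg dump, it can help to aggregate these lines per source/destination pair. A minimal sketch, assuming the line format shown above (the sample lines are excerpts from this log):

```python
import re
from collections import Counter

# sample dmesg excerpt, copied from the log above
dmesg = """\
[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
"""

# capture (destination, claimed source) from each martian-source line
pattern = re.compile(r"martian source (\S+) from (\S+),")
counts = Counter(pattern.findall(dmesg))
for (dst, src), n in counts.items():
    print(f"{src} -> {dst}: {n}")
```

A sudden spike in one pair usually points at a single misrouted flow (here, loopback-sourced traffic reaching the pod CIDR) rather than many independent problems.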
	
	
	==> etcd [b41ebfdd9122d5137b10a9e3c9cec812be627b957b6ad709a1901f66fa6a19a1] <==
	{"level":"info","ts":"2025-11-26T20:19:26.861347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-26T20:19:26.861518Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-26T20:19:26.863632Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-26T20:19:26.86372Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:19:26.863754Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:19:26.863916Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-26T20:19:26.863952Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-26T20:19:27.452178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-26T20:19:27.452227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-26T20:19:27.452259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-26T20:19:27.452278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-26T20:19:27.452291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-26T20:19:27.452303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-26T20:19:27.452316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-26T20:19:27.453155Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:19:27.453644Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:19:27.453659Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:19:27.453644Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-157431 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-26T20:19:27.453776Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:19:27.453869Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:19:27.453904Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:19:27.453924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-26T20:19:27.453948Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-26T20:19:27.455024Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-26T20:19:27.455041Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
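The etcd section above is healthy single-node startup: the raft member pre-votes, becomes candidate, and elects itself leader at term 2 within one tick. Because etcd emits structured JSON, extracting the raft state transitions is straightforward; a sketch using two lines copied from the log above:

```python
import json

# two raft transition lines, excerpted verbatim from the etcd log above
lines = [
    '{"level":"info","ts":"2025-11-26T20:19:27.452278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}',
    '{"level":"info","ts":"2025-11-26T20:19:27.452303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}',
]

# keep only the raft state transitions ("became ...") from the JSON stream
transitions = [
    json.loads(line)["msg"]
    for line in lines
    if "became" in json.loads(line).get("msg", "")
]
print(transitions)
```

In a failing run you would instead see repeated elections at increasing terms, which is the quickest signal to look for in this section.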
	
	
	==> kernel <==
	 20:20:09 up  1:02,  0 user,  load average: 3.08, 2.94, 1.82
	Linux old-k8s-version-157431 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e789126519a5fba28b3a3c5eb1f7ef498dc984e45339c930be3db3b18b1ce283] <==
	I1126 20:19:46.687560       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:19:46.687802       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:19:46.688014       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:19:46.688029       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:19:46.688047       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:19:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:19:46.889179       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:19:46.889539       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:19:46.889557       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:19:46.889690       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:19:47.389969       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:19:47.389992       1 metrics.go:72] Registering metrics
	I1126 20:19:47.390036       1 controller.go:711] "Syncing nftables rules"
	I1126 20:19:56.891656       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:19:56.891716       1 main.go:301] handling current node
	I1126 20:20:06.889517       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:20:06.889547       1 main.go:301] handling current node
	
	
	==> kube-apiserver [140b58a86cdf23b40ab65e1bc576c48c87ad3ebb0d540a7178f074897c4d74e4] <==
	I1126 20:19:28.502765       1 aggregator.go:166] initial CRD sync complete...
	I1126 20:19:28.502772       1 autoregister_controller.go:141] Starting autoregister controller
	I1126 20:19:28.502776       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:19:28.502782       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:19:28.502809       1 shared_informer.go:318] Caches are synced for configmaps
	I1126 20:19:28.503819       1 controller.go:624] quota admission added evaluator for: namespaces
	I1126 20:19:28.504102       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1126 20:19:28.504119       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1126 20:19:28.506567       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1126 20:19:28.697191       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:19:29.406862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:19:29.411618       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:19:29.411633       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:19:29.765717       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:19:29.796723       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:19:29.914519       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:19:29.919642       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1126 20:19:29.920410       1 controller.go:624] quota admission added evaluator for: endpoints
	I1126 20:19:29.923949       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:19:30.444112       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1126 20:19:31.131795       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1126 20:19:31.142024       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:19:31.151326       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1126 20:19:43.449542       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1126 20:19:44.099303       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [83905e74dc6821bcb00d2cc1d22b83cfb45cca5ec89b4be48bc1897f0b17fae6] <==
	I1126 20:19:43.399391       1 shared_informer.go:318] Caches are synced for resource quota
	I1126 20:19:43.406410       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1126 20:19:43.452449       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1126 20:19:43.814123       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:19:43.895829       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:19:43.895854       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1126 20:19:44.110256       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qqdfx"
	I1126 20:19:44.112074       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zlg4b"
	I1126 20:19:44.302510       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hpfqk"
	I1126 20:19:44.307315       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jhrhx"
	I1126 20:19:44.313115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="860.665773ms"
	I1126 20:19:44.318708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.537147ms"
	I1126 20:19:44.318832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.453µs"
	I1126 20:19:44.328894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.596µs"
	I1126 20:19:44.978055       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1126 20:19:44.984434       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hpfqk"
	I1126 20:19:44.990571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.156612ms"
	I1126 20:19:44.996317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.697661ms"
	I1126 20:19:44.996431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.189µs"
	I1126 20:19:56.999691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.693µs"
	I1126 20:19:57.009046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="199.466µs"
	I1126 20:19:58.289840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.786µs"
	I1126 20:19:58.296899       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1126 20:19:58.311427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.670574ms"
	I1126 20:19:58.311667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.181µs"
	
	
	==> kube-proxy [33b9c62a3d8bf41d5683824a858e105b99af14fd8b138bec6bdd0328a6f8328c] <==
	I1126 20:19:44.493425       1 server_others.go:69] "Using iptables proxy"
	I1126 20:19:44.502361       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1126 20:19:44.521430       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:19:44.523612       1 server_others.go:152] "Using iptables Proxier"
	I1126 20:19:44.523636       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1126 20:19:44.523642       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1126 20:19:44.523674       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1126 20:19:44.523926       1 server.go:846] "Version info" version="v1.28.0"
	I1126 20:19:44.523938       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:19:44.524981       1 config.go:315] "Starting node config controller"
	I1126 20:19:44.525023       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1126 20:19:44.525199       1 config.go:97] "Starting endpoint slice config controller"
	I1126 20:19:44.525224       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1126 20:19:44.525249       1 config.go:188] "Starting service config controller"
	I1126 20:19:44.525264       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1126 20:19:44.625521       1 shared_informer.go:318] Caches are synced for service config
	I1126 20:19:44.625529       1 shared_informer.go:318] Caches are synced for node config
	I1126 20:19:44.625561       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4e5fb4baf77c50392f4a83ffec11e025ed671b1c8c6d291107cacf2d15dcd515] <==
	W1126 20:19:28.455140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1126 20:19:28.455225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1126 20:19:28.455348       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1126 20:19:28.455380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1126 20:19:28.455505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1126 20:19:28.455532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1126 20:19:29.360617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1126 20:19:29.360655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1126 20:19:29.364748       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1126 20:19:29.364778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1126 20:19:29.421094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1126 20:19:29.421141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1126 20:19:29.427219       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1126 20:19:29.427251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1126 20:19:29.509327       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1126 20:19:29.509440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1126 20:19:29.526832       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1126 20:19:29.526871       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:19:29.578374       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1126 20:19:29.578403       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1126 20:19:29.614556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1126 20:19:29.614584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1126 20:19:29.654178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1126 20:19:29.654211       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1126 20:19:31.950080       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
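The scheduler's "forbidden" reflector errors above all occur before 20:19:31 and stop once the client-ca informer caches sync — the usual startup race where the scheduler begins listing resources before its RBAC bindings are established, not a persistent permission failure. To check quickly which resources were denied and whether the set is the expected one, a small sketch (log strings below are shortened excerpts of the lines above, with unrelated fields elided):

```python
import re

# shortened excerpts of the reflector errors from the scheduler log above
log = (
    'E1126 20:19:28.455225 ... cannot list resource "csidrivers" in API group "storage.k8s.io"\n'
    'E1126 20:19:29.427251 ... cannot list resource "pods" in API group ""\n'
)

# pull out every denied resource name
denied = sorted(set(re.findall(r'cannot list resource "([^"]+)"', log)))
print(denied)
```

If the same denials keep appearing after the caches-synced message, that would indicate a real RBAC misconfiguration rather than this benign race.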
	
	
	==> kubelet <==
	Nov 26 20:19:43 old-k8s-version-157431 kubelet[1400]: I1126 20:19:43.332074    1400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.115860    1400 topology_manager.go:215] "Topology Admit Handler" podUID="896fd93b-917a-42b9-92db-283923830743" podNamespace="kube-system" podName="kube-proxy-qqdfx"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.117448    1400 topology_manager.go:215] "Topology Admit Handler" podUID="9e7b6449-704d-42a1-863d-ec678f485d78" podNamespace="kube-system" podName="kindnet-zlg4b"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.195805    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57r8\" (UniqueName: \"kubernetes.io/projected/9e7b6449-704d-42a1-863d-ec678f485d78-kube-api-access-l57r8\") pod \"kindnet-zlg4b\" (UID: \"9e7b6449-704d-42a1-863d-ec678f485d78\") " pod="kube-system/kindnet-zlg4b"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.195848    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/896fd93b-917a-42b9-92db-283923830743-xtables-lock\") pod \"kube-proxy-qqdfx\" (UID: \"896fd93b-917a-42b9-92db-283923830743\") " pod="kube-system/kube-proxy-qqdfx"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.195869    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/896fd93b-917a-42b9-92db-283923830743-lib-modules\") pod \"kube-proxy-qqdfx\" (UID: \"896fd93b-917a-42b9-92db-283923830743\") " pod="kube-system/kube-proxy-qqdfx"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.195889    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmx6\" (UniqueName: \"kubernetes.io/projected/896fd93b-917a-42b9-92db-283923830743-kube-api-access-4fmx6\") pod \"kube-proxy-qqdfx\" (UID: \"896fd93b-917a-42b9-92db-283923830743\") " pod="kube-system/kube-proxy-qqdfx"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.195913    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e7b6449-704d-42a1-863d-ec678f485d78-xtables-lock\") pod \"kindnet-zlg4b\" (UID: \"9e7b6449-704d-42a1-863d-ec678f485d78\") " pod="kube-system/kindnet-zlg4b"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.195970    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/896fd93b-917a-42b9-92db-283923830743-kube-proxy\") pod \"kube-proxy-qqdfx\" (UID: \"896fd93b-917a-42b9-92db-283923830743\") " pod="kube-system/kube-proxy-qqdfx"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.196062    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e7b6449-704d-42a1-863d-ec678f485d78-lib-modules\") pod \"kindnet-zlg4b\" (UID: \"9e7b6449-704d-42a1-863d-ec678f485d78\") " pod="kube-system/kindnet-zlg4b"
	Nov 26 20:19:44 old-k8s-version-157431 kubelet[1400]: I1126 20:19:44.196127    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9e7b6449-704d-42a1-863d-ec678f485d78-cni-cfg\") pod \"kindnet-zlg4b\" (UID: \"9e7b6449-704d-42a1-863d-ec678f485d78\") " pod="kube-system/kindnet-zlg4b"
	Nov 26 20:19:45 old-k8s-version-157431 kubelet[1400]: I1126 20:19:45.261219    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qqdfx" podStartSLOduration=1.261153346 podCreationTimestamp="2025-11-26 20:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:19:45.260931359 +0000 UTC m=+14.151066523" watchObservedRunningTime="2025-11-26 20:19:45.261153346 +0000 UTC m=+14.151288510"
	Nov 26 20:19:47 old-k8s-version-157431 kubelet[1400]: I1126 20:19:47.264997    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-zlg4b" podStartSLOduration=1.173627212 podCreationTimestamp="2025-11-26 20:19:44 +0000 UTC" firstStartedPulling="2025-11-26 20:19:44.427562952 +0000 UTC m=+13.317698107" lastFinishedPulling="2025-11-26 20:19:46.518883511 +0000 UTC m=+15.409018669" observedRunningTime="2025-11-26 20:19:47.264808843 +0000 UTC m=+16.154944006" watchObservedRunningTime="2025-11-26 20:19:47.264947774 +0000 UTC m=+16.155082935"
	Nov 26 20:19:56 old-k8s-version-157431 kubelet[1400]: I1126 20:19:56.980412    1400 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 26 20:19:57 old-k8s-version-157431 kubelet[1400]: I1126 20:19:57.000027    1400 topology_manager.go:215] "Topology Admit Handler" podUID="483a52cf-1d0a-4b51-b9b1-d986b07fa545" podNamespace="kube-system" podName="coredns-5dd5756b68-jhrhx"
	Nov 26 20:19:57 old-k8s-version-157431 kubelet[1400]: I1126 20:19:57.001256    1400 topology_manager.go:215] "Topology Admit Handler" podUID="f6d6f6e0-74c6-4708-abff-c18f6962424e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 26 20:19:57 old-k8s-version-157431 kubelet[1400]: I1126 20:19:57.085666    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/483a52cf-1d0a-4b51-b9b1-d986b07fa545-config-volume\") pod \"coredns-5dd5756b68-jhrhx\" (UID: \"483a52cf-1d0a-4b51-b9b1-d986b07fa545\") " pod="kube-system/coredns-5dd5756b68-jhrhx"
	Nov 26 20:19:57 old-k8s-version-157431 kubelet[1400]: I1126 20:19:57.085717    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6d6f6e0-74c6-4708-abff-c18f6962424e-tmp\") pod \"storage-provisioner\" (UID: \"f6d6f6e0-74c6-4708-abff-c18f6962424e\") " pod="kube-system/storage-provisioner"
	Nov 26 20:19:57 old-k8s-version-157431 kubelet[1400]: I1126 20:19:57.085751    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz6lh\" (UniqueName: \"kubernetes.io/projected/f6d6f6e0-74c6-4708-abff-c18f6962424e-kube-api-access-fz6lh\") pod \"storage-provisioner\" (UID: \"f6d6f6e0-74c6-4708-abff-c18f6962424e\") " pod="kube-system/storage-provisioner"
	Nov 26 20:19:57 old-k8s-version-157431 kubelet[1400]: I1126 20:19:57.085872    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9hj\" (UniqueName: \"kubernetes.io/projected/483a52cf-1d0a-4b51-b9b1-d986b07fa545-kube-api-access-lm9hj\") pod \"coredns-5dd5756b68-jhrhx\" (UID: \"483a52cf-1d0a-4b51-b9b1-d986b07fa545\") " pod="kube-system/coredns-5dd5756b68-jhrhx"
	Nov 26 20:19:58 old-k8s-version-157431 kubelet[1400]: I1126 20:19:58.302983    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jhrhx" podStartSLOduration=14.302902026 podCreationTimestamp="2025-11-26 20:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:19:58.289777394 +0000 UTC m=+27.179912590" watchObservedRunningTime="2025-11-26 20:19:58.302902026 +0000 UTC m=+27.193037189"
	Nov 26 20:20:00 old-k8s-version-157431 kubelet[1400]: I1126 20:20:00.167612    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.167531579 podCreationTimestamp="2025-11-26 20:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:19:58.313889309 +0000 UTC m=+27.204024472" watchObservedRunningTime="2025-11-26 20:20:00.167531579 +0000 UTC m=+29.057666744"
	Nov 26 20:20:00 old-k8s-version-157431 kubelet[1400]: I1126 20:20:00.167927    1400 topology_manager.go:215] "Topology Admit Handler" podUID="d6c41f35-cc7b-423c-b8e2-76531e7a8b3b" podNamespace="default" podName="busybox"
	Nov 26 20:20:00 old-k8s-version-157431 kubelet[1400]: I1126 20:20:00.202669    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgqzr\" (UniqueName: \"kubernetes.io/projected/d6c41f35-cc7b-423c-b8e2-76531e7a8b3b-kube-api-access-kgqzr\") pod \"busybox\" (UID: \"d6c41f35-cc7b-423c-b8e2-76531e7a8b3b\") " pod="default/busybox"
	Nov 26 20:20:01 old-k8s-version-157431 kubelet[1400]: I1126 20:20:01.292277    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.553852723 podCreationTimestamp="2025-11-26 20:20:00 +0000 UTC" firstStartedPulling="2025-11-26 20:20:00.486975573 +0000 UTC m=+29.377110728" lastFinishedPulling="2025-11-26 20:20:01.225345978 +0000 UTC m=+30.115481129" observedRunningTime="2025-11-26 20:20:01.291891693 +0000 UTC m=+30.182026857" watchObservedRunningTime="2025-11-26 20:20:01.292223124 +0000 UTC m=+30.182358287"
	
	
	==> storage-provisioner [17c6e96e8ac8bbe859e31b96d709f78524a65aaa073852625ef0602496105763] <==
	I1126 20:19:57.357784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:19:57.366407       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:19:57.366443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1126 20:19:57.372428       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:19:57.372502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6320788-37e4-4724-9ee8-7a22321466b2", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-157431_1ace5bdd-bb5b-4de1-8546-21a78c36ad57 became leader
	I1126 20:19:57.372592       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-157431_1ace5bdd-bb5b-4de1-8546-21a78c36ad57!
	I1126 20:19:57.473434       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-157431_1ace5bdd-bb5b-4de1-8546-21a78c36ad57!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-157431 -n old-k8s-version-157431
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-157431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.19s)

TestStartStop/group/old-k8s-version/serial/Pause (5.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-157431 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-157431 --alsologtostderr -v=1: exit status 80 (2.064789657s)

-- stdout --
	* Pausing node old-k8s-version-157431 ... 
	
	

-- /stdout --
** stderr ** 
	I1126 20:21:04.264122  253453 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:21:04.264387  253453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:21:04.264396  253453 out.go:374] Setting ErrFile to fd 2...
	I1126 20:21:04.264401  253453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:21:04.264649  253453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:21:04.264923  253453 out.go:368] Setting JSON to false
	I1126 20:21:04.264941  253453 mustload.go:66] Loading cluster: old-k8s-version-157431
	I1126 20:21:04.265436  253453 config.go:182] Loaded profile config "old-k8s-version-157431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:21:04.265917  253453 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:21:04.284039  253453 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:21:04.284265  253453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:21:04.339684  253453 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-26 20:21:04.328781082 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:21:04.340272  253453 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-157431 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:21:04.341943  253453 out.go:179] * Pausing node old-k8s-version-157431 ... 
	I1126 20:21:04.342910  253453 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:21:04.343174  253453 ssh_runner.go:195] Run: systemctl --version
	I1126 20:21:04.343247  253453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:21:04.360096  253453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:21:04.455603  253453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:21:04.467217  253453 pause.go:52] kubelet running: true
	I1126 20:21:04.467269  253453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:21:04.618794  253453 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:21:04.618864  253453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:21:04.682536  253453 cri.go:89] found id: "6c6f323935b9bb8fa8c4aab620011fbe3b488cd55cf74ff43d5189c233b9a31a"
	I1126 20:21:04.682562  253453 cri.go:89] found id: "accee1a5d908d0156207ba0fe0bd3d00a19a4cdf28557c3848fd5a55e1fa67e5"
	I1126 20:21:04.682575  253453 cri.go:89] found id: "39908a2bff30f053ae4e66e2b1f1ed4846ab95ad78050fe44589cb96f7e2ddb9"
	I1126 20:21:04.682581  253453 cri.go:89] found id: "16fda5da153ba465a321b8058a258b00de264660a2e64bf42345ae13e96d6044"
	I1126 20:21:04.682586  253453 cri.go:89] found id: "a504f533180fa3660ae26e19df70e0f8b5e648c9eac05e818e5a34d99dfa556d"
	I1126 20:21:04.682591  253453 cri.go:89] found id: "d8d8479be421b12e08c13d54e96b2828f60bf10165cebecd5a7fe6990720f66d"
	I1126 20:21:04.682595  253453 cri.go:89] found id: "9646e408ccc61c8cc3bf687319bc9e134650a818bbe9f04715d5a74998506dc6"
	I1126 20:21:04.682600  253453 cri.go:89] found id: "abbeedf1745d559c442cf4064712a5660d93918ca1d77d450979a8d7cb48fd5b"
	I1126 20:21:04.682605  253453 cri.go:89] found id: "55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	I1126 20:21:04.682617  253453 cri.go:89] found id: "d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb"
	I1126 20:21:04.682624  253453 cri.go:89] found id: ""
	I1126 20:21:04.682665  253453 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:21:04.693750  253453 retry.go:31] will retry after 222.71779ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:21:04Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:21:04.917206  253453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:21:04.929693  253453 pause.go:52] kubelet running: false
	I1126 20:21:04.929749  253453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:21:05.070312  253453 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:21:05.070394  253453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:21:05.139447  253453 cri.go:89] found id: "6c6f323935b9bb8fa8c4aab620011fbe3b488cd55cf74ff43d5189c233b9a31a"
	I1126 20:21:05.139476  253453 cri.go:89] found id: "accee1a5d908d0156207ba0fe0bd3d00a19a4cdf28557c3848fd5a55e1fa67e5"
	I1126 20:21:05.139482  253453 cri.go:89] found id: "39908a2bff30f053ae4e66e2b1f1ed4846ab95ad78050fe44589cb96f7e2ddb9"
	I1126 20:21:05.139486  253453 cri.go:89] found id: "16fda5da153ba465a321b8058a258b00de264660a2e64bf42345ae13e96d6044"
	I1126 20:21:05.139491  253453 cri.go:89] found id: "a504f533180fa3660ae26e19df70e0f8b5e648c9eac05e818e5a34d99dfa556d"
	I1126 20:21:05.139496  253453 cri.go:89] found id: "d8d8479be421b12e08c13d54e96b2828f60bf10165cebecd5a7fe6990720f66d"
	I1126 20:21:05.139499  253453 cri.go:89] found id: "9646e408ccc61c8cc3bf687319bc9e134650a818bbe9f04715d5a74998506dc6"
	I1126 20:21:05.139502  253453 cri.go:89] found id: "abbeedf1745d559c442cf4064712a5660d93918ca1d77d450979a8d7cb48fd5b"
	I1126 20:21:05.139504  253453 cri.go:89] found id: "55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	I1126 20:21:05.139510  253453 cri.go:89] found id: "d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb"
	I1126 20:21:05.139513  253453 cri.go:89] found id: ""
	I1126 20:21:05.139543  253453 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:21:05.151487  253453 retry.go:31] will retry after 362.137228ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:21:05Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:21:05.514024  253453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:21:05.528230  253453 pause.go:52] kubelet running: false
	I1126 20:21:05.528277  253453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:21:05.675351  253453 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:21:05.675424  253453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:21:05.744316  253453 cri.go:89] found id: "6c6f323935b9bb8fa8c4aab620011fbe3b488cd55cf74ff43d5189c233b9a31a"
	I1126 20:21:05.744340  253453 cri.go:89] found id: "accee1a5d908d0156207ba0fe0bd3d00a19a4cdf28557c3848fd5a55e1fa67e5"
	I1126 20:21:05.744346  253453 cri.go:89] found id: "39908a2bff30f053ae4e66e2b1f1ed4846ab95ad78050fe44589cb96f7e2ddb9"
	I1126 20:21:05.744351  253453 cri.go:89] found id: "16fda5da153ba465a321b8058a258b00de264660a2e64bf42345ae13e96d6044"
	I1126 20:21:05.744356  253453 cri.go:89] found id: "a504f533180fa3660ae26e19df70e0f8b5e648c9eac05e818e5a34d99dfa556d"
	I1126 20:21:05.744361  253453 cri.go:89] found id: "d8d8479be421b12e08c13d54e96b2828f60bf10165cebecd5a7fe6990720f66d"
	I1126 20:21:05.744366  253453 cri.go:89] found id: "9646e408ccc61c8cc3bf687319bc9e134650a818bbe9f04715d5a74998506dc6"
	I1126 20:21:05.744370  253453 cri.go:89] found id: "abbeedf1745d559c442cf4064712a5660d93918ca1d77d450979a8d7cb48fd5b"
	I1126 20:21:05.744375  253453 cri.go:89] found id: "55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	I1126 20:21:05.744401  253453 cri.go:89] found id: "d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb"
	I1126 20:21:05.744410  253453 cri.go:89] found id: ""
	I1126 20:21:05.744468  253453 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:21:05.756010  253453 retry.go:31] will retry after 282.907116ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:21:05Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:21:06.039543  253453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:21:06.052322  253453 pause.go:52] kubelet running: false
	I1126 20:21:06.052404  253453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:21:06.188708  253453 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:21:06.188784  253453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:21:06.252282  253453 cri.go:89] found id: "6c6f323935b9bb8fa8c4aab620011fbe3b488cd55cf74ff43d5189c233b9a31a"
	I1126 20:21:06.252301  253453 cri.go:89] found id: "accee1a5d908d0156207ba0fe0bd3d00a19a4cdf28557c3848fd5a55e1fa67e5"
	I1126 20:21:06.252306  253453 cri.go:89] found id: "39908a2bff30f053ae4e66e2b1f1ed4846ab95ad78050fe44589cb96f7e2ddb9"
	I1126 20:21:06.252309  253453 cri.go:89] found id: "16fda5da153ba465a321b8058a258b00de264660a2e64bf42345ae13e96d6044"
	I1126 20:21:06.252311  253453 cri.go:89] found id: "a504f533180fa3660ae26e19df70e0f8b5e648c9eac05e818e5a34d99dfa556d"
	I1126 20:21:06.252315  253453 cri.go:89] found id: "d8d8479be421b12e08c13d54e96b2828f60bf10165cebecd5a7fe6990720f66d"
	I1126 20:21:06.252317  253453 cri.go:89] found id: "9646e408ccc61c8cc3bf687319bc9e134650a818bbe9f04715d5a74998506dc6"
	I1126 20:21:06.252320  253453 cri.go:89] found id: "abbeedf1745d559c442cf4064712a5660d93918ca1d77d450979a8d7cb48fd5b"
	I1126 20:21:06.252322  253453 cri.go:89] found id: "55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	I1126 20:21:06.252329  253453 cri.go:89] found id: "d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb"
	I1126 20:21:06.252341  253453 cri.go:89] found id: ""
	I1126 20:21:06.252387  253453 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:21:06.265677  253453 out.go:203] 
	W1126 20:21:06.266925  253453 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:21:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:21:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:21:06.266945  253453 out.go:285] * 
	* 
	W1126 20:21:06.270913  253453 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:21:06.272056  253453 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-157431 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-157431
helpers_test.go:243: (dbg) docker inspect old-k8s-version-157431:

-- stdout --
	[
	    {
	        "Id": "77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf",
	        "Created": "2025-11-26T20:19:16.110022495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248374,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:20:27.036811807Z",
	            "FinishedAt": "2025-11-26T20:20:26.182757108Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/hostname",
	        "HostsPath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/hosts",
	        "LogPath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf-json.log",
	        "Name": "/old-k8s-version-157431",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-157431:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-157431",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf",
	                "LowerDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-157431",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-157431/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-157431",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-157431",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-157431",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7a6192aeaf4e67c796ad61fee172ea0757828251dfb01a56f7aa51e613593c11",
	            "SandboxKey": "/var/run/docker/netns/7a6192aeaf4e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-157431": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d4f1dd69a726aa0138274371b25ff8174904f4f402419e4752de500c743a887",
	                    "EndpointID": "0b30ba07803e4894731f3e73b76fc587179d5ca8d57350c4dad694b61f719e32",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "22:23:97:a5:5b:ee",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-157431",
	                        "77bb37b66fd7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157431 -n old-k8s-version-157431
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157431 -n old-k8s-version-157431: exit status 2 (329.238153ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-157431 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-157431 logs -n 25: (1.011553203s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-825702 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo containerd config dump                                                                                                                                                                                                  │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo crio config                                                                                                                                                                                                             │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ delete  │ -p cilium-825702                                                                                                                                                                                                                              │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │ 26 Nov 25 20:18 UTC │
	│ start   │ -p cert-options-706331 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │ 26 Nov 25 20:19 UTC │
	│ ssh     │ cert-options-706331 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ ssh     │ -p cert-options-706331 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ delete  │ -p cert-options-706331                                                                                                                                                                                                                        │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-157431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │                     │
	│ stop    │ -p old-k8s-version-157431 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-157431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ image   │ old-k8s-version-157431 image list --format=json                                                                                                                                                                                               │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ pause   │ -p old-k8s-version-157431 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:20:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:20:26.818437  248170 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:20:26.818551  248170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:20:26.818560  248170 out.go:374] Setting ErrFile to fd 2...
	I1126 20:20:26.818564  248170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:20:26.818750  248170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:20:26.819148  248170 out.go:368] Setting JSON to false
	I1126 20:20:26.820318  248170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3777,"bootTime":1764184650,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:20:26.820373  248170 start.go:143] virtualization: kvm guest
	I1126 20:20:26.822194  248170 out.go:179] * [old-k8s-version-157431] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:20:26.823308  248170 notify.go:221] Checking for updates...
	I1126 20:20:26.823332  248170 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:20:26.824359  248170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:20:26.825754  248170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:20:26.826897  248170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:20:26.828116  248170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:20:26.829080  248170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:20:26.830529  248170 config.go:182] Loaded profile config "old-k8s-version-157431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:20:26.832158  248170 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1126 20:20:26.833246  248170 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:20:26.857357  248170 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:20:26.857470  248170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:20:26.911898  248170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-26 20:20:26.901890798 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:20:26.911998  248170 docker.go:319] overlay module found
	I1126 20:20:26.913416  248170 out.go:179] * Using the docker driver based on existing profile
	I1126 20:20:26.914430  248170 start.go:309] selected driver: docker
	I1126 20:20:26.914440  248170 start.go:927] validating driver "docker" against &{Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:20:26.914530  248170 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:20:26.915062  248170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:20:26.970248  248170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-26 20:20:26.961278035 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:20:26.970546  248170 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:20:26.970576  248170 cni.go:84] Creating CNI manager for ""
	I1126 20:20:26.970628  248170 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:20:26.970664  248170 start.go:353] cluster config:
	{Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:20:26.972021  248170 out.go:179] * Starting "old-k8s-version-157431" primary control-plane node in "old-k8s-version-157431" cluster
	I1126 20:20:26.973050  248170 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:20:26.974201  248170 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:20:26.975251  248170 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:20:26.975284  248170 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1126 20:20:26.975304  248170 cache.go:65] Caching tarball of preloaded images
	I1126 20:20:26.975344  248170 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:20:26.975393  248170 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:20:26.975404  248170 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1126 20:20:26.975539  248170 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/config.json ...
	I1126 20:20:26.994764  248170 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:20:26.994783  248170 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:20:26.994797  248170 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:20:26.994821  248170 start.go:360] acquireMachinesLock for old-k8s-version-157431: {Name:mkea810daa6c92d5318c72561874a0f25d5c921b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:20:26.994879  248170 start.go:364] duration metric: took 34.603µs to acquireMachinesLock for "old-k8s-version-157431"
	I1126 20:20:26.994895  248170 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:20:26.994902  248170 fix.go:54] fixHost starting: 
	I1126 20:20:26.995087  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:27.011470  248170 fix.go:112] recreateIfNeeded on old-k8s-version-157431: state=Stopped err=<nil>
	W1126 20:20:27.011495  248170 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:20:23.990794  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:23.991224  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:23.991272  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:23.991315  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:24.024690  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:24.024717  216504 cri.go:89] found id: ""
	I1126 20:20:24.024727  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:24.024783  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:24.028511  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:24.028559  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:24.060663  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:24.060684  216504 cri.go:89] found id: ""
	I1126 20:20:24.060693  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:24.060743  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:24.063992  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:24.064039  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:24.095759  216504 cri.go:89] found id: ""
	I1126 20:20:24.095776  216504 logs.go:282] 0 containers: []
	W1126 20:20:24.095782  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:24.095792  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:24.095842  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:24.127784  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:24.127807  216504 cri.go:89] found id: ""
	I1126 20:20:24.127816  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:24.127864  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:24.131286  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:24.131336  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:24.163992  216504 cri.go:89] found id: ""
	I1126 20:20:24.164015  216504 logs.go:282] 0 containers: []
	W1126 20:20:24.164021  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:24.164027  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:24.164074  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:24.195914  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:24.195933  216504 cri.go:89] found id: ""
	I1126 20:20:24.195944  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:24.196003  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:24.199424  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:24.199502  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:24.231398  216504 cri.go:89] found id: ""
	I1126 20:20:24.231420  216504 logs.go:282] 0 containers: []
	W1126 20:20:24.231427  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:24.231433  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:24.231500  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:24.263657  216504 cri.go:89] found id: ""
	I1126 20:20:24.263682  216504 logs.go:282] 0 containers: []
	W1126 20:20:24.263692  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:24.263708  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:24.263718  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:24.279006  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:24.279027  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:24.345423  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:24.345447  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:24.377909  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:24.377932  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:24.419509  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:24.419535  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:24.510033  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:24.510063  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:24.568362  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:24.568387  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:24.568402  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:24.604439  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:24.604474  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:24.636747  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:24.636772  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:27.172522  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:27.172878  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:27.172933  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:27.172987  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:27.210365  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:27.210400  216504 cri.go:89] found id: ""
	I1126 20:20:27.210417  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:27.210486  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:27.214451  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:27.214542  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:27.249896  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:27.249917  216504 cri.go:89] found id: ""
	I1126 20:20:27.249927  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:27.249975  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:27.253630  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:27.253699  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:27.290543  216504 cri.go:89] found id: ""
	I1126 20:20:27.290569  216504 logs.go:282] 0 containers: []
	W1126 20:20:27.290577  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:27.290585  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:27.290636  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:27.329319  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:27.329344  216504 cri.go:89] found id: ""
	I1126 20:20:27.329354  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:27.329415  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:27.333334  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:27.333385  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:27.369006  216504 cri.go:89] found id: ""
	I1126 20:20:27.369029  216504 logs.go:282] 0 containers: []
	W1126 20:20:27.369039  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:27.369046  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:27.369090  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:27.404435  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:27.404478  216504 cri.go:89] found id: ""
	I1126 20:20:27.404488  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:27.404545  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:27.408036  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:27.408086  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:27.447986  216504 cri.go:89] found id: ""
	I1126 20:20:27.448011  216504 logs.go:282] 0 containers: []
	W1126 20:20:27.448020  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:27.448028  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:27.448089  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:27.481175  216504 cri.go:89] found id: ""
	I1126 20:20:27.481195  216504 logs.go:282] 0 containers: []
	W1126 20:20:27.481202  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:27.481215  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:27.481225  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:27.525560  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:27.525584  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:27.630135  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:27.630168  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:27.669450  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:27.669490  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:27.702147  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:27.702177  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:27.735479  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:27.735508  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:27.749952  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:27.749979  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:27.808579  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:27.808598  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:27.808610  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:27.877986  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:27.878021  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:25.870921  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:25.871272  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:25.871318  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:25.871362  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:25.897016  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:25.897032  211567 cri.go:89] found id: ""
	I1126 20:20:25.897039  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:25.897079  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:25.900793  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:25.900848  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:25.925270  211567 cri.go:89] found id: ""
	I1126 20:20:25.925295  211567 logs.go:282] 0 containers: []
	W1126 20:20:25.925302  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:25.925307  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:25.925356  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:25.949769  211567 cri.go:89] found id: ""
	I1126 20:20:25.949791  211567 logs.go:282] 0 containers: []
	W1126 20:20:25.949801  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:25.949807  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:25.949854  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:25.974141  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:25.974163  211567 cri.go:89] found id: ""
	I1126 20:20:25.974173  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:25.974217  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:25.977749  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:25.977793  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:26.001378  211567 cri.go:89] found id: ""
	I1126 20:20:26.001399  211567 logs.go:282] 0 containers: []
	W1126 20:20:26.001410  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:26.001416  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:26.001475  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:26.025142  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:26.025156  211567 cri.go:89] found id: ""
	I1126 20:20:26.025168  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:26.025211  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:26.028851  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:26.028906  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:26.053121  211567 cri.go:89] found id: ""
	I1126 20:20:26.053141  211567 logs.go:282] 0 containers: []
	W1126 20:20:26.053150  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:26.053157  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:26.053200  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:26.078227  211567 cri.go:89] found id: ""
	I1126 20:20:26.078243  211567 logs.go:282] 0 containers: []
	W1126 20:20:26.078250  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:26.078257  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:26.078266  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:26.131017  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:26.131039  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:26.131053  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:26.164080  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:26.164110  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:26.215069  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:26.215099  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:26.244701  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:26.244727  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:26.290317  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:26.290342  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:26.319494  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:26.319517  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:26.408302  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:26.408327  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:28.922377  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:28.922764  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:28.922828  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:28.922889  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:28.948750  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:28.948770  211567 cri.go:89] found id: ""
	I1126 20:20:28.948781  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:28.948837  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:28.952682  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:28.952739  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:28.976587  211567 cri.go:89] found id: ""
	I1126 20:20:28.976611  211567 logs.go:282] 0 containers: []
	W1126 20:20:28.976620  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:28.976627  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:28.976679  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:29.000078  211567 cri.go:89] found id: ""
	I1126 20:20:29.000099  211567 logs.go:282] 0 containers: []
	W1126 20:20:29.000109  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:29.000116  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:29.000160  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:29.024173  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:29.024190  211567 cri.go:89] found id: ""
	I1126 20:20:29.024197  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:29.024238  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:29.027691  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:29.027749  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:29.053216  211567 cri.go:89] found id: ""
	I1126 20:20:29.053239  211567 logs.go:282] 0 containers: []
	W1126 20:20:29.053252  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:29.053257  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:29.053310  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:29.077339  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:29.077360  211567 cri.go:89] found id: ""
	I1126 20:20:29.077369  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:29.077420  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:29.080947  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:29.081000  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:29.104636  211567 cri.go:89] found id: ""
	I1126 20:20:29.104657  211567 logs.go:282] 0 containers: []
	W1126 20:20:29.104663  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:29.104668  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:29.104707  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:29.129948  211567 cri.go:89] found id: ""
	I1126 20:20:29.129965  211567 logs.go:282] 0 containers: []
	W1126 20:20:29.129972  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:29.129980  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:29.129988  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:29.174507  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:29.174528  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:29.202589  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:29.202609  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:29.285682  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:29.285706  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:29.298921  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:29.298945  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:29.350940  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:29.350962  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:29.350974  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:29.381555  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:29.381579  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:29.429852  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:29.429878  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:27.013047  248170 out.go:252] * Restarting existing docker container for "old-k8s-version-157431" ...
	I1126 20:20:27.013106  248170 cli_runner.go:164] Run: docker start old-k8s-version-157431
	I1126 20:20:27.291654  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:27.310140  248170 kic.go:430] container "old-k8s-version-157431" state is running.
	I1126 20:20:27.310641  248170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-157431
	I1126 20:20:27.330099  248170 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/config.json ...
	I1126 20:20:27.330338  248170 machine.go:94] provisionDockerMachine start ...
	I1126 20:20:27.330424  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:27.349666  248170 main.go:143] libmachine: Using SSH client type: native
	I1126 20:20:27.349899  248170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1126 20:20:27.349911  248170 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:20:27.350525  248170 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47394->127.0.0.1:33058: read: connection reset by peer
	I1126 20:20:30.490730  248170 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-157431
	
	I1126 20:20:30.490757  248170 ubuntu.go:182] provisioning hostname "old-k8s-version-157431"
	I1126 20:20:30.490824  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:30.509658  248170 main.go:143] libmachine: Using SSH client type: native
	I1126 20:20:30.509921  248170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1126 20:20:30.509942  248170 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-157431 && echo "old-k8s-version-157431" | sudo tee /etc/hostname
	I1126 20:20:30.662884  248170 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-157431
	
	I1126 20:20:30.662962  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:30.682002  248170 main.go:143] libmachine: Using SSH client type: native
	I1126 20:20:30.682227  248170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1126 20:20:30.682245  248170 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-157431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-157431/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-157431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:20:30.823849  248170 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:20:30.823876  248170 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:20:30.823925  248170 ubuntu.go:190] setting up certificates
	I1126 20:20:30.823943  248170 provision.go:84] configureAuth start
	I1126 20:20:30.824000  248170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-157431
	I1126 20:20:30.841953  248170 provision.go:143] copyHostCerts
	I1126 20:20:30.842006  248170 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:20:30.842015  248170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:20:30.842085  248170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:20:30.842196  248170 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:20:30.842207  248170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:20:30.842249  248170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:20:30.842318  248170 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:20:30.842329  248170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:20:30.842365  248170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:20:30.842429  248170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-157431 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-157431]
	I1126 20:20:30.964785  248170 provision.go:177] copyRemoteCerts
	I1126 20:20:30.964842  248170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:20:30.964872  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:30.983629  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.084260  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:20:31.100853  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1126 20:20:31.117240  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:20:31.133209  248170 provision.go:87] duration metric: took 309.257526ms to configureAuth
	I1126 20:20:31.133229  248170 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:20:31.133370  248170 config.go:182] Loaded profile config "old-k8s-version-157431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:20:31.133447  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.151201  248170 main.go:143] libmachine: Using SSH client type: native
	I1126 20:20:31.151439  248170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1126 20:20:31.151471  248170 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:20:31.455505  248170 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
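The `mkdir -p && printf | tee` command above writes a one-variable environment drop-in that the crio unit picks up after restart. A small sketch of just the file-writing step, redirected into a scratch directory instead of `/etc/sysconfig` (no `systemctl restart` here):

```shell
# Sketch of writing the CRIO_MINIKUBE_OPTIONS drop-in from the log,
# targeting a temp directory rather than /etc/sysconfig.
dir=$(mktemp -d)
printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" > "$dir/crio.minikube"
grep CRIO_MINIKUBE_OPTIONS "$dir/crio.minikube"
```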
	I1126 20:20:31.455534  248170 machine.go:97] duration metric: took 4.125179201s to provisionDockerMachine
	I1126 20:20:31.455549  248170 start.go:293] postStartSetup for "old-k8s-version-157431" (driver="docker")
	I1126 20:20:31.455562  248170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:20:31.455633  248170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:20:31.455676  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.473882  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.570817  248170 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:20:31.574103  248170 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:20:31.574147  248170 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:20:31.574156  248170 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:20:31.574199  248170 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:20:31.574265  248170 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:20:31.574344  248170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:20:31.581304  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:20:31.598131  248170 start.go:296] duration metric: took 142.56904ms for postStartSetup
	I1126 20:20:31.598197  248170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:20:31.598237  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.616517  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.709913  248170 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:20:31.714258  248170 fix.go:56] duration metric: took 4.719351205s for fixHost
	I1126 20:20:31.714280  248170 start.go:83] releasing machines lock for "old-k8s-version-157431", held for 4.719390513s
	I1126 20:20:31.714382  248170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-157431
	I1126 20:20:31.731738  248170 ssh_runner.go:195] Run: cat /version.json
	I1126 20:20:31.731802  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.731829  248170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:20:31.731897  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.749673  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.750007  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.900881  248170 ssh_runner.go:195] Run: systemctl --version
	I1126 20:20:31.906913  248170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:20:31.940752  248170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:20:31.944950  248170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:20:31.945002  248170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:20:31.952392  248170 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:20:31.952407  248170 start.go:496] detecting cgroup driver to use...
	I1126 20:20:31.952432  248170 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:20:31.952499  248170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:20:31.967165  248170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:20:31.979205  248170 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:20:31.979254  248170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:20:31.993300  248170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:20:32.006014  248170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:20:32.093726  248170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:20:32.176418  248170 docker.go:234] disabling docker service ...
	I1126 20:20:32.176490  248170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:20:32.192162  248170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:20:32.203912  248170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:20:32.287262  248170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:20:32.384389  248170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:20:32.396965  248170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:20:32.411369  248170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1126 20:20:32.411427  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.419960  248170 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:20:32.420016  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.428418  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.436761  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.446252  248170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:20:32.453785  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.461942  248170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.469776  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.477996  248170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:20:32.485125  248170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
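The sequence of `sed` invocations above edits `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup manager, and replace any `conmon_cgroup` line with one inserted after `cgroup_manager`. A minimal sketch of those rewrites against a scratch copy (the three seed lines are stand-ins for the real drop-in contents):

```shell
# Sketch of the 02-crio.conf rewrites from the log, on a temp file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
EOF
# pin the pause image and the cgroup driver, as in the log
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
# drop any existing conmon_cgroup line, then re-add it after cgroup_manager
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
cat "$conf"
```

Deleting before appending keeps the edit idempotent: rerunning the last two `sed` commands still leaves exactly one `conmon_cgroup` line.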
	I1126 20:20:32.493012  248170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:20:32.572909  248170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:20:32.706653  248170 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:20:32.706704  248170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:20:32.710486  248170 start.go:564] Will wait 60s for crictl version
	I1126 20:20:32.710540  248170 ssh_runner.go:195] Run: which crictl
	I1126 20:20:32.713975  248170 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:20:32.738237  248170 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:20:32.738295  248170 ssh_runner.go:195] Run: crio --version
	I1126 20:20:32.765122  248170 ssh_runner.go:195] Run: crio --version
	I1126 20:20:32.793081  248170 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1126 20:20:30.422400  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:30.422751  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:30.422799  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:30.422849  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:30.456109  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:30.456136  216504 cri.go:89] found id: ""
	I1126 20:20:30.456146  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:30.456196  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:30.459577  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:30.459627  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:30.492790  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:30.492811  216504 cri.go:89] found id: ""
	I1126 20:20:30.492820  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:30.492868  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:30.496478  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:30.496541  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:30.531548  216504 cri.go:89] found id: ""
	I1126 20:20:30.531574  216504 logs.go:282] 0 containers: []
	W1126 20:20:30.531584  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:30.531592  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:30.531644  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:30.565649  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:30.565672  216504 cri.go:89] found id: ""
	I1126 20:20:30.565683  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:30.565742  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:30.569586  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:30.569644  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:30.602566  216504 cri.go:89] found id: ""
	I1126 20:20:30.602591  216504 logs.go:282] 0 containers: []
	W1126 20:20:30.602600  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:30.602609  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:30.602661  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:30.634966  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:30.634991  216504 cri.go:89] found id: ""
	I1126 20:20:30.635000  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:30.635039  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:30.638480  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:30.638537  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:30.672199  216504 cri.go:89] found id: ""
	I1126 20:20:30.672222  216504 logs.go:282] 0 containers: []
	W1126 20:20:30.672231  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:30.672238  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:30.672295  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:30.706092  216504 cri.go:89] found id: ""
	I1126 20:20:30.706115  216504 logs.go:282] 0 containers: []
	W1126 20:20:30.706125  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:30.706141  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:30.706155  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:30.743705  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:30.743729  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:30.776900  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:30.776929  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:30.848539  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:30.848563  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:30.894402  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:30.894428  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:30.910002  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:30.910030  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:30.967630  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:30.967650  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:30.967662  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:31.002705  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:31.002734  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:31.039835  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:31.039863  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:32.794207  248170 cli_runner.go:164] Run: docker network inspect old-k8s-version-157431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:20:32.811852  248170 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:20:32.815629  248170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
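The one-liner above uses a different idempotency strategy than the hostname rewrite: filter out any existing `host.minikube.internal` line with `grep -v`, append a fresh entry, and copy the result back over the original. A sketch of that strip-and-append pattern on a temporary file (the gateway IP `192.168.49.1` is a stand-in):

```shell
# Sketch of the "strip old entry, append fresh one" /etc/hosts pattern
# from the log, using temp files in place of /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.76.1\thost.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
cat "$hosts"
```

Writing to a separate temp file before `cp` avoids truncating the target while `grep` is still reading it, which is why the log's version does the same rather than redirecting straight into `/etc/hosts`.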
	I1126 20:20:32.825013  248170 kubeadm.go:884] updating cluster {Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:20:32.825113  248170 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:20:32.825160  248170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:20:32.855891  248170 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:20:32.855919  248170 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:20:32.855964  248170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:20:32.880233  248170 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:20:32.880251  248170 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:20:32.880258  248170 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1126 20:20:32.880341  248170 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-157431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:20:32.880397  248170 ssh_runner.go:195] Run: crio config
	I1126 20:20:32.923527  248170 cni.go:84] Creating CNI manager for ""
	I1126 20:20:32.923550  248170 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:20:32.923566  248170 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:20:32.923596  248170 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-157431 NodeName:old-k8s-version-157431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:20:32.923748  248170 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-157431"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:20:32.923825  248170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1126 20:20:32.931445  248170 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:20:32.931499  248170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:20:32.938558  248170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1126 20:20:32.949970  248170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:20:32.961240  248170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1126 20:20:32.972576  248170 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:20:32.975767  248170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:20:32.984949  248170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:20:33.064452  248170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:20:33.091343  248170 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431 for IP: 192.168.76.2
	I1126 20:20:33.091378  248170 certs.go:195] generating shared ca certs ...
	I1126 20:20:33.091394  248170 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:20:33.091556  248170 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:20:33.091622  248170 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:20:33.091635  248170 certs.go:257] generating profile certs ...
	I1126 20:20:33.091741  248170 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.key
	I1126 20:20:33.091818  248170 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key.162086cc
	I1126 20:20:33.091880  248170 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.key
	I1126 20:20:33.092015  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:20:33.092057  248170 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:20:33.092067  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:20:33.092096  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:20:33.092126  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:20:33.092149  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:20:33.092193  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:20:33.092865  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:20:33.110409  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:20:33.127547  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:20:33.144547  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:20:33.163617  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:20:33.183947  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:20:33.200265  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:20:33.216501  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:20:33.232558  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:20:33.249003  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:20:33.265426  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:20:33.282385  248170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:20:33.294216  248170 ssh_runner.go:195] Run: openssl version
	I1126 20:20:33.300181  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:20:33.307809  248170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:20:33.311252  248170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:20:33.311299  248170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:20:33.345009  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:20:33.352087  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:20:33.359914  248170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:20:33.363445  248170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:20:33.363539  248170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:20:33.398869  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:20:33.408555  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:20:33.416495  248170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:20:33.419928  248170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:20:33.419965  248170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:20:33.453474  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
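The `openssl x509 -hash` / `ln -fs` pairs above install each CA into the OpenSSL trust directory, where certs are looked up by subject-hash filenames such as `b5213941.0`. A sketch of the same flow, assuming `openssl` is on PATH and using a self-signed throwaway cert in a temp dir rather than `/etc/ssl/certs`:

```shell
# Generate a disposable self-signed CA cert (key discarded after creation).
CERTDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" \
  -keyout "$CERTDIR/ca.key" -out "$CERTDIR/minikubeCA.pem" 2>/dev/null

# Compute the subject hash OpenSSL uses as the lookup filename, then create
# the <hash>.0 symlink exactly as the logged `test -L || ln -fs` command does.
HASH=$(openssl x509 -hash -noout -in "$CERTDIR/minikubeCA.pem")
test -L "$CERTDIR/$HASH.0" || ln -fs "$CERTDIR/minikubeCA.pem" "$CERTDIR/$HASH.0"

# Reading the cert through the symlink confirms the lookup path works.
openssl x509 -noout -subject -in "$CERTDIR/$HASH.0"
```

The `.0` suffix disambiguates multiple certs that hash to the same value; minikube only ever installs one per hash, so `.0` suffices here.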
	I1126 20:20:33.461096  248170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:20:33.464603  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:20:33.498554  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:20:33.532660  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:20:33.566798  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:20:33.608505  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:20:33.651676  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
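The six `-checkend 86400` runs above are minikube's expiry probe: `openssl x509 -checkend N` exits 0 if the cert will still be valid N seconds from now and 1 otherwise, so a 24-hour window flags certs worth regenerating before they lapse. A sketch with a throwaway two-day cert (assumes `openssl` is available):

```shell
# Self-signed cert valid for 2 days; the key is not needed, so discard it.
CERT=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=expiry-demo" -keyout /dev/null -out "$CERT" 2>/dev/null

# Still valid 24h out -> exit 0, so the && branch runs.
openssl x509 -noout -in "$CERT" -checkend 86400 && echo "valid for 24h"

# Expired 3 days out -> exit 1, so the || branch runs.
openssl x509 -noout -in "$CERT" -checkend $((3*86400)) || echo "expires within 3d"
```

Because the check is pure exit-status, it composes cleanly into scripts and into the `ssh_runner` calls seen in the log, with no output parsing required.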
	I1126 20:20:33.698809  248170 kubeadm.go:401] StartCluster: {Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:20:33.698923  248170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:20:33.699096  248170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:20:33.739772  248170 cri.go:89] found id: "a504f533180fa3660ae26e19df70e0f8b5e648c9eac05e818e5a34d99dfa556d"
	I1126 20:20:33.739816  248170 cri.go:89] found id: "d8d8479be421b12e08c13d54e96b2828f60bf10165cebecd5a7fe6990720f66d"
	I1126 20:20:33.739822  248170 cri.go:89] found id: "9646e408ccc61c8cc3bf687319bc9e134650a818bbe9f04715d5a74998506dc6"
	I1126 20:20:33.739827  248170 cri.go:89] found id: "abbeedf1745d559c442cf4064712a5660d93918ca1d77d450979a8d7cb48fd5b"
	I1126 20:20:33.739840  248170 cri.go:89] found id: ""
	I1126 20:20:33.739911  248170 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:20:33.755015  248170 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:20:33Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:20:33.755092  248170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:20:33.764003  248170 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:20:33.764021  248170 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:20:33.764074  248170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:20:33.773026  248170 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:20:33.774755  248170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-157431" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:20:33.775386  248170 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-157431" cluster setting kubeconfig missing "old-k8s-version-157431" context setting]
	I1126 20:20:33.776273  248170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:20:33.778289  248170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:20:33.787041  248170 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1126 20:20:33.787072  248170 kubeadm.go:602] duration metric: took 23.044617ms to restartPrimaryControlPlane
	I1126 20:20:33.787081  248170 kubeadm.go:403] duration metric: took 88.29816ms to StartCluster
	I1126 20:20:33.787096  248170 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:20:33.787149  248170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:20:33.788872  248170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:20:33.789105  248170 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:20:33.789312  248170 config.go:182] Loaded profile config "old-k8s-version-157431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:20:33.789355  248170 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:20:33.789429  248170 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-157431"
	I1126 20:20:33.789445  248170 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-157431"
	W1126 20:20:33.789453  248170 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:20:33.789502  248170 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:20:33.789627  248170 addons.go:70] Setting dashboard=true in profile "old-k8s-version-157431"
	I1126 20:20:33.789637  248170 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-157431"
	I1126 20:20:33.789654  248170 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-157431"
	I1126 20:20:33.789654  248170 addons.go:239] Setting addon dashboard=true in "old-k8s-version-157431"
	W1126 20:20:33.789664  248170 addons.go:248] addon dashboard should already be in state true
	I1126 20:20:33.789691  248170 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:20:33.789982  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:33.790003  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:33.790137  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:33.791581  248170 out.go:179] * Verifying Kubernetes components...
	I1126 20:20:33.792877  248170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:20:33.819582  248170 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:20:33.820878  248170 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-157431"
	W1126 20:20:33.820901  248170 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:20:33.820929  248170 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:20:33.820884  248170 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:20:33.821495  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:33.822088  248170 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:20:33.822107  248170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:20:33.822157  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:33.822208  248170 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:20:31.955584  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:31.955935  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:31.955980  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:31.956017  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:31.981839  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:31.981853  211567 cri.go:89] found id: ""
	I1126 20:20:31.981860  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:31.981910  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:31.985349  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:31.985405  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:32.011443  211567 cri.go:89] found id: ""
	I1126 20:20:32.011479  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.011489  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:32.011497  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:32.011539  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:32.044197  211567 cri.go:89] found id: ""
	I1126 20:20:32.044223  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.044233  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:32.044241  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:32.044296  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:32.070600  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:32.070621  211567 cri.go:89] found id: ""
	I1126 20:20:32.070629  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:32.070681  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:32.074192  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:32.074246  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:32.099706  211567 cri.go:89] found id: ""
	I1126 20:20:32.099730  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.099740  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:32.099747  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:32.099799  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:32.133396  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:32.133415  211567 cri.go:89] found id: ""
	I1126 20:20:32.133423  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:32.133478  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:32.137016  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:32.137074  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:32.162767  211567 cri.go:89] found id: ""
	I1126 20:20:32.162792  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.162802  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:32.162809  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:32.162858  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:32.186715  211567 cri.go:89] found id: ""
	I1126 20:20:32.186737  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.186746  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:32.186756  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:32.186769  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:32.215504  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:32.215526  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:32.304110  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:32.304134  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:32.321643  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:32.321676  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:32.385782  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:32.385806  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:32.385820  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:32.417595  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:32.417617  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:32.467212  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:32.467231  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:32.494219  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:32.494240  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:35.044528  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:35.044912  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:35.044958  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:35.045017  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:35.073438  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:35.073481  211567 cri.go:89] found id: ""
	I1126 20:20:35.073490  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:35.073535  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:35.077572  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:35.077628  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:35.107121  211567 cri.go:89] found id: ""
	I1126 20:20:35.107143  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.107153  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:35.107160  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:35.107201  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:35.135791  211567 cri.go:89] found id: ""
	I1126 20:20:35.135813  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.135820  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:35.135825  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:35.135869  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:35.163256  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:35.163273  211567 cri.go:89] found id: ""
	I1126 20:20:35.163280  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:35.163330  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:35.167367  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:35.167423  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:35.195191  211567 cri.go:89] found id: ""
	I1126 20:20:35.195216  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.195226  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:35.195234  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:35.195289  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:35.226830  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:35.226852  211567 cri.go:89] found id: ""
	I1126 20:20:35.226862  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:35.226925  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:35.230857  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:35.230915  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:35.258294  211567 cri.go:89] found id: ""
	I1126 20:20:35.258321  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.258331  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:35.258338  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:35.258391  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:35.284947  211567 cri.go:89] found id: ""
	I1126 20:20:35.284971  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.284980  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:35.284990  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:35.285003  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:35.306208  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:35.306243  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:35.361699  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:35.361715  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:35.361728  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:35.392730  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:35.392755  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:35.447705  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:35.447768  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:35.480433  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:35.480483  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:35.533519  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:35.533555  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:35.582612  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:35.582650  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:33.823358  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:20:33.823385  248170 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:20:33.823439  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:33.861792  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:33.862130  248170 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:20:33.862145  248170 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:20:33.862194  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:33.863262  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:33.889227  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:33.966837  248170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:20:33.985699  248170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:20:33.986015  248170 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-157431" to be "Ready" ...
	I1126 20:20:33.991356  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:20:33.991375  248170 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:20:34.005603  248170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:20:34.009431  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:20:34.009452  248170 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:20:34.025899  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:20:34.025921  248170 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:20:34.045079  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:20:34.045105  248170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:20:34.060675  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:20:34.060698  248170 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:20:34.079295  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:20:34.079334  248170 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:20:34.097084  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:20:34.097107  248170 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:20:34.113196  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:20:34.113218  248170 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:20:34.128736  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:20:34.128756  248170 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:20:34.143961  248170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:20:35.686054  248170 node_ready.go:49] node "old-k8s-version-157431" is "Ready"
	I1126 20:20:35.686089  248170 node_ready.go:38] duration metric: took 1.700046128s for node "old-k8s-version-157431" to be "Ready" ...
	I1126 20:20:35.686105  248170 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:20:35.686159  248170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:20:36.322793  248170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.337044602s)
	I1126 20:20:36.322840  248170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.31720366s)
	I1126 20:20:36.610694  248170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.466693098s)
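The dashboard apply that just completed passes every manifest as its own `-f` flag on a single `kubectl apply` invocation. A minimal sketch of assembling such a command line (the helper name `kubectl_apply_cmd` is hypothetical, not minikube's actual code, which lives in Go):

```python
def kubectl_apply_cmd(kubectl: str, manifests: list[str]) -> str:
    """Build a `kubectl apply` command with one -f flag per manifest,
    mirroring the shape of the command seen in the log above."""
    flags = " ".join(f"-f {m}" for m in manifests)
    return f"sudo KUBECONFIG=/var/lib/minikube/kubeconfig {kubectl} apply {flags}"


# Example with two of the dashboard manifests from the log:
print(kubectl_apply_cmd(
    "/var/lib/minikube/binaries/v1.28.0/kubectl",
    ["/etc/kubernetes/addons/dashboard-ns.yaml",
     "/etc/kubernetes/addons/dashboard-svc.yaml"],
))
```

Applying all manifests in one invocation is why a single `ssh_runner` completion line (with one duration metric) covers the whole dashboard addon.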
	I1126 20:20:36.610741  248170 api_server.go:72] duration metric: took 2.821605769s to wait for apiserver process to appear ...
	I1126 20:20:36.610766  248170 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:20:36.610841  248170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:20:36.611981  248170 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-157431 addons enable metrics-server
	
	I1126 20:20:36.613173  248170 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1126 20:20:36.614200  248170 addons.go:530] duration metric: took 2.824848299s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1126 20:20:36.615523  248170 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1126 20:20:36.615542  248170 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
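The 500 body above is kube-apiserver's line-per-check `/healthz` report: passing checks are prefixed `[+]`, failing ones `[-]`, and a trailing summary line closes it. A minimal sketch of extracting the failing check names from such a body (the function name `parse_healthz` is an illustration, not minikube's parser):

```python
def parse_healthz(body: str) -> list[str]:
    """Return the names of failing checks from a kube-apiserver /healthz body.

    Each check is reported as "[+]<name> ok" or
    "[-]<name> failed: <reason>"; the summary line has no prefix.
    """
    failed = []
    for line in body.splitlines():
        line = line.strip()
        if line.startswith("[-]"):
            # e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
            failed.append(line[3:].split(" failed", 1)[0])
    return failed


sample = """[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed"""

print(parse_healthz(sample))
```

Run against the dump above, the only failing checks are the two poststarthooks (`rbac/bootstrap-roles` and `scheduling/bootstrap-system-priority-classes`), which is consistent with an apiserver that is up but still bootstrapping; the 200 `ok` a few lines later confirms both cleared.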
	I1126 20:20:33.628862  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:33.629231  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:33.629283  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:33.629338  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:33.672480  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:33.672501  216504 cri.go:89] found id: ""
	I1126 20:20:33.672509  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:33.672557  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:33.676724  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:33.676782  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:33.726991  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:33.727020  216504 cri.go:89] found id: ""
	I1126 20:20:33.727030  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:33.727087  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:33.732587  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:33.732649  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:33.778747  216504 cri.go:89] found id: ""
	I1126 20:20:33.778769  216504 logs.go:282] 0 containers: []
	W1126 20:20:33.778778  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:33.778786  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:33.778840  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:33.842067  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:33.842090  216504 cri.go:89] found id: ""
	I1126 20:20:33.842100  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:33.842161  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:33.849118  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:33.849185  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:33.903924  216504 cri.go:89] found id: ""
	I1126 20:20:33.903954  216504 logs.go:282] 0 containers: []
	W1126 20:20:33.903964  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:33.903971  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:33.904042  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:33.944988  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:33.945030  216504 cri.go:89] found id: ""
	I1126 20:20:33.945041  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:33.945105  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:33.949184  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:33.949243  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:33.992661  216504 cri.go:89] found id: ""
	I1126 20:20:33.992685  216504 logs.go:282] 0 containers: []
	W1126 20:20:33.992694  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:33.992701  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:33.992750  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:34.042140  216504 cri.go:89] found id: ""
	I1126 20:20:34.042166  216504 logs.go:282] 0 containers: []
	W1126 20:20:34.042272  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:34.042297  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:34.042311  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:34.098637  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:34.098670  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:34.141122  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:34.141147  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:34.177226  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:34.177256  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:34.231601  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:34.231634  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:34.348270  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:34.348300  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:34.364088  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:34.364114  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:34.431906  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:34.431929  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:34.431943  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:34.503579  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:34.503608  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:37.040444  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:37.040847  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:37.040903  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:37.040961  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:37.074285  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:37.074305  216504 cri.go:89] found id: ""
	I1126 20:20:37.074315  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:37.074356  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:37.078318  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:37.078391  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:37.112759  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:37.112779  216504 cri.go:89] found id: ""
	I1126 20:20:37.112788  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:37.112838  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:37.116909  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:37.116964  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:37.154047  216504 cri.go:89] found id: ""
	I1126 20:20:37.154070  216504 logs.go:282] 0 containers: []
	W1126 20:20:37.154079  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:37.154087  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:37.154134  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:37.187722  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:37.187742  216504 cri.go:89] found id: ""
	I1126 20:20:37.187749  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:37.187796  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:37.191374  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:37.191434  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:37.230454  216504 cri.go:89] found id: ""
	I1126 20:20:37.230492  216504 logs.go:282] 0 containers: []
	W1126 20:20:37.230502  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:37.230509  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:37.230564  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:37.263155  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:37.263180  216504 cri.go:89] found id: ""
	I1126 20:20:37.263190  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:37.263246  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:37.267538  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:37.267590  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:37.299187  216504 cri.go:89] found id: ""
	I1126 20:20:37.299206  216504 logs.go:282] 0 containers: []
	W1126 20:20:37.299212  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:37.299217  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:37.299266  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:37.333307  216504 cri.go:89] found id: ""
	I1126 20:20:37.333327  216504 logs.go:282] 0 containers: []
	W1126 20:20:37.333337  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:37.333355  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:37.333372  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:37.347946  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:37.347966  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:37.406097  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:37.406117  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:37.406133  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:37.441416  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:37.441438  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:37.508962  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:37.508986  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:37.541793  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:37.541818  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:37.584139  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:37.584165  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:37.671823  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:37.671846  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:37.708177  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:37.708201  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
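Each retry cycle above walks a fixed component list (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, storage-provisioner), runs `crictl ps -a --quiet --name=<component>` for each, and then either lists the found IDs or warns that no container matched. A minimal sketch of that selection step, assuming the crictl output is already captured as text with one container ID per line (helper names are hypothetical):

```python
def containers_from_crictl(output: str) -> list[str]:
    """IDs from `crictl ps -a --quiet --name=<component>`: one ID per line."""
    return [line.strip() for line in output.splitlines() if line.strip()]


def describe(component: str, output: str) -> str:
    """Mirror the two log shapes above: an ID list, or a not-found warning."""
    ids = containers_from_crictl(output)
    if not ids:
        return f'No container was found matching "{component}"'
    return f"{len(ids)} containers: [{' '.join(ids)}]"


# kube-proxy produced empty output in this cycle; kube-apiserver produced one ID.
print(describe("kube-proxy", "\n"))
print(describe(
    "kube-apiserver",
    "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823\n",
))
```

Only components with at least one ID get a follow-up `crictl logs --tail 400 <id>` in the "Gathering logs" phase, which is why coredns, kube-proxy, kindnet, and storage-provisioner never appear there.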
	I1126 20:20:38.193908  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:38.194295  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:38.194343  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:38.194391  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:38.220311  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:38.220334  211567 cri.go:89] found id: ""
	I1126 20:20:38.220344  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:38.220400  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:38.224100  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:38.224162  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:38.248255  211567 cri.go:89] found id: ""
	I1126 20:20:38.248276  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.248282  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:38.248288  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:38.248336  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:38.273947  211567 cri.go:89] found id: ""
	I1126 20:20:38.273976  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.273983  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:38.273991  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:38.274045  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:38.298131  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:38.298150  211567 cri.go:89] found id: ""
	I1126 20:20:38.298159  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:38.298211  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:38.301689  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:38.301745  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:38.326529  211567 cri.go:89] found id: ""
	I1126 20:20:38.326546  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.326552  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:38.326557  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:38.326594  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:38.351071  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:38.351087  211567 cri.go:89] found id: ""
	I1126 20:20:38.351095  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:38.351139  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:38.354585  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:38.354629  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:38.378888  211567 cri.go:89] found id: ""
	I1126 20:20:38.378909  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.378916  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:38.378922  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:38.378962  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:38.403010  211567 cri.go:89] found id: ""
	I1126 20:20:38.403032  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.403042  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:38.403051  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:38.403059  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:38.430387  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:38.430407  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:38.519735  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:38.519771  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:38.534287  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:38.534314  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:38.586771  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:38.586795  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:38.586810  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:38.617599  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:38.617623  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:38.667927  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:38.667949  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:38.692943  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:38.692967  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:37.111297  248170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:20:37.115626  248170 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:20:37.116935  248170 api_server.go:141] control plane version: v1.28.0
	I1126 20:20:37.116959  248170 api_server.go:131] duration metric: took 506.134197ms to wait for apiserver health ...
	I1126 20:20:37.116975  248170 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:20:37.120863  248170 system_pods.go:59] 8 kube-system pods found
	I1126 20:20:37.120900  248170 system_pods.go:61] "coredns-5dd5756b68-jhrhx" [483a52cf-1d0a-4b51-b9b1-d986b07fa545] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:20:37.120908  248170 system_pods.go:61] "etcd-old-k8s-version-157431" [6b5ff917-ffe0-4bd0-bc19-2cbbcb7511c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:20:37.120919  248170 system_pods.go:61] "kindnet-zlg4b" [9e7b6449-704d-42a1-863d-ec678f485d78] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:20:37.120925  248170 system_pods.go:61] "kube-apiserver-old-k8s-version-157431" [4755237d-9665-4f4a-acdb-7aad9f3a685f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:20:37.120930  248170 system_pods.go:61] "kube-controller-manager-old-k8s-version-157431" [d139f643-1c90-4ccb-9841-d5da46929720] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:20:37.120938  248170 system_pods.go:61] "kube-proxy-qqdfx" [896fd93b-917a-42b9-92db-283923830743] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:20:37.120943  248170 system_pods.go:61] "kube-scheduler-old-k8s-version-157431" [54a1cc6d-cdf5-4041-aa7e-7edda8f78380] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:20:37.120951  248170 system_pods.go:61] "storage-provisioner" [f6d6f6e0-74c6-4708-abff-c18f6962424e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:20:37.120959  248170 system_pods.go:74] duration metric: took 3.977406ms to wait for pod list to return data ...
	I1126 20:20:37.120971  248170 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:20:37.122863  248170 default_sa.go:45] found service account: "default"
	I1126 20:20:37.122884  248170 default_sa.go:55] duration metric: took 1.903092ms for default service account to be created ...
	I1126 20:20:37.122894  248170 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:20:37.126064  248170 system_pods.go:86] 8 kube-system pods found
	I1126 20:20:37.126096  248170 system_pods.go:89] "coredns-5dd5756b68-jhrhx" [483a52cf-1d0a-4b51-b9b1-d986b07fa545] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:20:37.126108  248170 system_pods.go:89] "etcd-old-k8s-version-157431" [6b5ff917-ffe0-4bd0-bc19-2cbbcb7511c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:20:37.126119  248170 system_pods.go:89] "kindnet-zlg4b" [9e7b6449-704d-42a1-863d-ec678f485d78] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:20:37.126132  248170 system_pods.go:89] "kube-apiserver-old-k8s-version-157431" [4755237d-9665-4f4a-acdb-7aad9f3a685f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:20:37.126141  248170 system_pods.go:89] "kube-controller-manager-old-k8s-version-157431" [d139f643-1c90-4ccb-9841-d5da46929720] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:20:37.126153  248170 system_pods.go:89] "kube-proxy-qqdfx" [896fd93b-917a-42b9-92db-283923830743] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:20:37.126166  248170 system_pods.go:89] "kube-scheduler-old-k8s-version-157431" [54a1cc6d-cdf5-4041-aa7e-7edda8f78380] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:20:37.126174  248170 system_pods.go:89] "storage-provisioner" [f6d6f6e0-74c6-4708-abff-c18f6962424e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:20:37.126187  248170 system_pods.go:126] duration metric: took 3.281761ms to wait for k8s-apps to be running ...
	I1126 20:20:37.126199  248170 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:20:37.126240  248170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:20:37.139851  248170 system_svc.go:56] duration metric: took 13.647733ms WaitForService to wait for kubelet
	I1126 20:20:37.139878  248170 kubeadm.go:587] duration metric: took 3.350740739s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:20:37.139897  248170 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:20:37.142153  248170 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:20:37.142172  248170 node_conditions.go:123] node cpu capacity is 8
	I1126 20:20:37.142186  248170 node_conditions.go:105] duration metric: took 2.27842ms to run NodePressure ...
	I1126 20:20:37.142197  248170 start.go:242] waiting for startup goroutines ...
	I1126 20:20:37.142206  248170 start.go:247] waiting for cluster config update ...
	I1126 20:20:37.142215  248170 start.go:256] writing updated cluster config ...
	I1126 20:20:37.142443  248170 ssh_runner.go:195] Run: rm -f paused
	I1126 20:20:37.146374  248170 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:20:37.150624  248170 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-jhrhx" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:20:39.156129  248170 pod_ready.go:104] pod "coredns-5dd5756b68-jhrhx" is not "Ready", error: node "old-k8s-version-157431" hosting pod "coredns-5dd5756b68-jhrhx" is not "Ready" (will retry)
	W1126 20:20:41.655352  248170 pod_ready.go:104] pod "coredns-5dd5756b68-jhrhx" is not "Ready", error: node "old-k8s-version-157431" hosting pod "coredns-5dd5756b68-jhrhx" is not "Ready" (will retry)
	I1126 20:20:40.248146  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:41.241494  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1126 20:20:44.155289  248170 pod_ready.go:104] pod "coredns-5dd5756b68-jhrhx" is not "Ready", error: node "old-k8s-version-157431" hosting pod "coredns-5dd5756b68-jhrhx" is not "Ready" (will retry)
	I1126 20:20:46.655804  248170 pod_ready.go:94] pod "coredns-5dd5756b68-jhrhx" is "Ready"
	I1126 20:20:46.655828  248170 pod_ready.go:86] duration metric: took 9.50518735s for pod "coredns-5dd5756b68-jhrhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:46.658492  248170 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:46.662019  248170 pod_ready.go:94] pod "etcd-old-k8s-version-157431" is "Ready"
	I1126 20:20:46.662035  248170 pod_ready.go:86] duration metric: took 3.526002ms for pod "etcd-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:46.664348  248170 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:45.249553  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1126 20:20:45.249625  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:45.249694  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:45.283521  216504 cri.go:89] found id: "a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:20:45.283539  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:45.283550  216504 cri.go:89] found id: ""
	I1126 20:20:45.283560  216504 logs.go:282] 2 containers: [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:45.283612  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.287093  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.290453  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:45.290510  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:45.322500  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:45.322520  216504 cri.go:89] found id: ""
	I1126 20:20:45.322529  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:45.322564  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.326000  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:45.326054  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:45.358652  216504 cri.go:89] found id: ""
	I1126 20:20:45.358676  216504 logs.go:282] 0 containers: []
	W1126 20:20:45.358686  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:45.358693  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:45.358732  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:45.391304  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:45.391323  216504 cri.go:89] found id: ""
	I1126 20:20:45.391329  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:45.391369  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.394901  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:45.394961  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:45.426890  216504 cri.go:89] found id: ""
	I1126 20:20:45.426912  216504 logs.go:282] 0 containers: []
	W1126 20:20:45.426921  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:45.426927  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:45.426974  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:45.459132  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:45.459154  216504 cri.go:89] found id: ""
	I1126 20:20:45.459165  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:45.459206  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.462602  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:45.462650  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:45.494221  216504 cri.go:89] found id: ""
	I1126 20:20:45.494240  216504 logs.go:282] 0 containers: []
	W1126 20:20:45.494247  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:45.494252  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:45.494294  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:45.528363  216504 cri.go:89] found id: ""
	I1126 20:20:45.528384  216504 logs.go:282] 0 containers: []
	W1126 20:20:45.528390  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:45.528402  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:45.528412  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:45.543065  216504 logs.go:123] Gathering logs for kube-apiserver [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431] ...
	I1126 20:20:45.543086  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:20:45.577358  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:45.577383  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:45.608558  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:45.608584  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:45.679538  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:45.679561  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:45.723806  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:45.723830  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1126 20:20:46.241904  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1126 20:20:46.241954  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:46.242008  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:46.268289  211567 cri.go:89] found id: "30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:20:46.268310  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:46.268316  211567 cri.go:89] found id: ""
	I1126 20:20:46.268323  211567 logs.go:282] 2 containers: [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:46.268374  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:46.272164  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:46.275972  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:46.276029  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:46.302263  211567 cri.go:89] found id: ""
	I1126 20:20:46.302284  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.302290  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:46.302296  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:46.302333  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:46.327276  211567 cri.go:89] found id: ""
	I1126 20:20:46.327294  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.327301  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:46.327307  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:46.327343  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:46.351875  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:46.351898  211567 cri.go:89] found id: ""
	I1126 20:20:46.351906  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:46.351946  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:46.355565  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:46.355610  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:46.379609  211567 cri.go:89] found id: ""
	I1126 20:20:46.379634  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.379643  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:46.379650  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:46.379688  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:46.403904  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:46.403924  211567 cri.go:89] found id: ""
	I1126 20:20:46.403931  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:46.403971  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:46.407585  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:46.407636  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:46.433109  211567 cri.go:89] found id: ""
	I1126 20:20:46.433127  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.433133  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:46.433138  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:46.433174  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:46.457416  211567 cri.go:89] found id: ""
	I1126 20:20:46.457435  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.457441  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:46.457469  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:46.457482  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:46.505502  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:46.505527  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:46.530301  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:46.530323  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:46.558232  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:46.558254  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:48.669420  248170 pod_ready.go:104] pod "kube-apiserver-old-k8s-version-157431" is not "Ready", error: <nil>
	I1126 20:20:49.168771  248170 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-157431" is "Ready"
	I1126 20:20:49.168799  248170 pod_ready.go:86] duration metric: took 2.504432021s for pod "kube-apiserver-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:49.171171  248170 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:20:51.177583  248170 pod_ready.go:104] pod "kube-controller-manager-old-k8s-version-157431" is not "Ready", error: <nil>
	I1126 20:20:52.178079  248170 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-157431" is "Ready"
	I1126 20:20:52.178109  248170 pod_ready.go:86] duration metric: took 3.006914887s for pod "kube-controller-manager-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.181483  248170 pod_ready.go:83] waiting for pod "kube-proxy-qqdfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.186183  248170 pod_ready.go:94] pod "kube-proxy-qqdfx" is "Ready"
	I1126 20:20:52.186215  248170 pod_ready.go:86] duration metric: took 4.704469ms for pod "kube-proxy-qqdfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.189303  248170 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.454980  248170 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-157431" is "Ready"
	I1126 20:20:52.455011  248170 pod_ready.go:86] duration metric: took 265.676811ms for pod "kube-scheduler-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.455026  248170 pod_ready.go:40] duration metric: took 15.308615482s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:20:52.512132  248170 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1126 20:20:52.515110  248170 out.go:203] 
	W1126 20:20:52.516291  248170 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1126 20:20:52.517564  248170 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1126 20:20:52.518897  248170 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-157431" cluster and "default" namespace by default
	I1126 20:20:55.781512  216504 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.057662128s)
	W1126 20:20:55.781549  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1126 20:20:55.781556  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:55.781568  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:55.818257  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:55.818282  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:55.853882  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:55.853916  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:55.890893  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:55.890923  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:56.612804  211567 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.054527456s)
	W1126 20:20:56.612844  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1126 20:20:56.612856  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:56.612869  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:56.658594  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:56.658624  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:56.745352  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:56.745376  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:56.758661  211567 logs.go:123] Gathering logs for kube-apiserver [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c] ...
	I1126 20:20:56.758684  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:20:56.789078  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:56.789101  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:59.319951  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:21:00.107208  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:49868->192.168.103.2:8443: read: connection reset by peer
	I1126 20:21:00.107262  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:21:00.107307  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:21:00.133835  211567 cri.go:89] found id: "30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:21:00.133854  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:21:00.133859  211567 cri.go:89] found id: ""
	I1126 20:21:00.133866  211567 logs.go:282] 2 containers: [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:21:00.133910  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.137743  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.141254  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:21:00.141307  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:21:00.166926  211567 cri.go:89] found id: ""
	I1126 20:21:00.166947  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.166956  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:21:00.166963  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:21:00.167014  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:21:00.193410  211567 cri.go:89] found id: ""
	I1126 20:21:00.193435  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.193443  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:21:00.193451  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:21:00.193513  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:21:00.219254  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:21:00.219280  211567 cri.go:89] found id: ""
	I1126 20:21:00.219290  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:21:00.219334  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.223080  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:21:00.223148  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:21:00.248004  211567 cri.go:89] found id: ""
	I1126 20:21:00.248028  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.248042  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:21:00.248049  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:21:00.248098  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:21:00.273571  211567 cri.go:89] found id: "43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5"
	I1126 20:21:00.273594  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:21:00.273598  211567 cri.go:89] found id: ""
	I1126 20:21:00.273606  211567 logs.go:282] 2 containers: [43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:21:00.273648  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.277454  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.280911  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:21:00.280966  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:21:00.306789  211567 cri.go:89] found id: ""
	I1126 20:21:00.306816  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.306825  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:21:00.306833  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:21:00.306885  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:21:00.333309  211567 cri.go:89] found id: ""
	I1126 20:21:00.333335  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.333344  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:21:00.333360  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:21:00.333372  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:21:00.418949  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:21:00.418976  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:21:00.433102  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:21:00.433131  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:21:00.486267  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:21:00.486286  211567 logs.go:123] Gathering logs for kube-apiserver [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c] ...
	I1126 20:21:00.486295  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:21:00.515988  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:21:00.516010  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:21:00.565124  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:21:00.565146  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	W1126 20:21:00.589615  211567 logs.go:130] failed kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:21:00.587835    6073 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf\": container with ID starting with b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf not found: ID does not exist" containerID="b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	time="2025-11-26T20:21:00Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf\": container with ID starting with b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf not found: ID does not exist"
	 output: 
	** stderr ** 
	E1126 20:21:00.587835    6073 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf\": container with ID starting with b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf not found: ID does not exist" containerID="b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	time="2025-11-26T20:21:00Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf\": container with ID starting with b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf not found: ID does not exist"
	
	** /stderr **
	I1126 20:21:00.589642  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:21:00.589657  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:21:00.638883  211567 logs.go:123] Gathering logs for kube-controller-manager [43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5] ...
	I1126 20:21:00.638909  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5"
	I1126 20:21:00.663348  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:21:00.663369  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:21:00.689150  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:21:00.689174  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:58.481865  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:58.482284  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:58.482341  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:58.482402  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:58.515489  216504 cri.go:89] found id: "a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:20:58.515512  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:58.515518  216504 cri.go:89] found id: ""
	I1126 20:20:58.515528  216504 logs.go:282] 2 containers: [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:58.515594  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.519256  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.522995  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:58.523056  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:58.554588  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:58.554605  216504 cri.go:89] found id: ""
	I1126 20:20:58.554614  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:58.554666  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.558013  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:58.558064  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:58.590485  216504 cri.go:89] found id: ""
	I1126 20:20:58.590507  216504 logs.go:282] 0 containers: []
	W1126 20:20:58.590515  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:58.590520  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:58.590564  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:58.622428  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:58.622448  216504 cri.go:89] found id: ""
	I1126 20:20:58.622483  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:58.622536  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.625879  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:58.625937  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:58.657015  216504 cri.go:89] found id: ""
	I1126 20:20:58.657039  216504 logs.go:282] 0 containers: []
	W1126 20:20:58.657048  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:58.657055  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:58.657098  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:58.689215  216504 cri.go:89] found id: "4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:20:58.689235  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:58.689241  216504 cri.go:89] found id: ""
	I1126 20:20:58.689250  216504 logs.go:282] 2 containers: [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:58.689301  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.692709  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.695923  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:58.695967  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:58.727730  216504 cri.go:89] found id: ""
	I1126 20:20:58.727751  216504 logs.go:282] 0 containers: []
	W1126 20:20:58.727761  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:58.727766  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:58.727813  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:58.760590  216504 cri.go:89] found id: ""
	I1126 20:20:58.760614  216504 logs.go:282] 0 containers: []
	W1126 20:20:58.760624  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:58.760635  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:58.760649  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:58.849907  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:58.849931  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:58.907806  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:58.907824  216504 logs.go:123] Gathering logs for kube-apiserver [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431] ...
	I1126 20:20:58.907835  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:20:58.945284  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:58.945312  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:58.978413  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:58.978439  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:59.048200  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:59.048230  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:59.080748  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:59.080771  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:59.119627  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:59.119653  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:59.134100  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:59.134122  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	W1126 20:20:59.166399  216504 logs.go:130] failed kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:20:59.164187    6589 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823\": container with ID starting with 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 not found: ID does not exist" containerID="904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	time="2025-11-26T20:20:59Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823\": container with ID starting with 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1126 20:20:59.164187    6589 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823\": container with ID starting with 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 not found: ID does not exist" containerID="904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	time="2025-11-26T20:20:59Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823\": container with ID starting with 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 not found: ID does not exist"
	
	** /stderr **
	I1126 20:20:59.166421  216504 logs.go:123] Gathering logs for kube-controller-manager [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354] ...
	I1126 20:20:59.166432  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:20:59.198370  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:59.198395  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:21:01.744507  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:21:01.744893  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:21:01.744945  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:21:01.745002  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:21:01.779949  216504 cri.go:89] found id: "a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:21:01.779966  216504 cri.go:89] found id: ""
	I1126 20:21:01.779974  216504 logs.go:282] 1 containers: [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431]
	I1126 20:21:01.780026  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.783582  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:21:01.783640  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:21:01.816786  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:21:01.816802  216504 cri.go:89] found id: ""
	I1126 20:21:01.816810  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:21:01.816856  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.820211  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:21:01.820266  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:21:01.853845  216504 cri.go:89] found id: ""
	I1126 20:21:01.853870  216504 logs.go:282] 0 containers: []
	W1126 20:21:01.853876  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:21:01.853882  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:21:01.853935  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:21:01.886072  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:21:01.886088  216504 cri.go:89] found id: ""
	I1126 20:21:01.886095  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:21:01.886147  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.889487  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:21:01.889540  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:21:01.921561  216504 cri.go:89] found id: ""
	I1126 20:21:01.921580  216504 logs.go:282] 0 containers: []
	W1126 20:21:01.921587  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:21:01.921593  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:21:01.921630  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:21:01.955564  216504 cri.go:89] found id: "4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:21:01.955584  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:21:01.955590  216504 cri.go:89] found id: ""
	I1126 20:21:01.955598  216504 logs.go:282] 2 containers: [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:21:01.955652  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.959137  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.962442  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:21:01.962504  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:21:01.994064  216504 cri.go:89] found id: ""
	I1126 20:21:01.994084  216504 logs.go:282] 0 containers: []
	W1126 20:21:01.994093  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:21:01.994099  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:21:01.994146  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:21:02.026601  216504 cri.go:89] found id: ""
	I1126 20:21:02.026626  216504 logs.go:282] 0 containers: []
	W1126 20:21:02.026635  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:21:02.026652  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:21:02.026669  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:21:02.041107  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:21:02.041128  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:21:02.098164  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:21:02.098185  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:21:02.098199  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:21:02.130561  216504 logs.go:123] Gathering logs for kube-controller-manager [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354] ...
	I1126 20:21:02.130587  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:21:02.162989  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:21:02.163023  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:21:02.199056  216504 logs.go:123] Gathering logs for kube-apiserver [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431] ...
	I1126 20:21:02.199082  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:21:02.234644  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:21:02.234670  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:21:02.310868  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:21:02.310894  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:21:02.344534  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:21:02.344558  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:21:02.389597  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:21:02.389621  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:21:03.219973  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:21:03.220340  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:21:03.220391  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:21:03.220441  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:21:03.246482  211567 cri.go:89] found id: "30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:21:03.246501  211567 cri.go:89] found id: ""
	I1126 20:21:03.246510  211567 logs.go:282] 1 containers: [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c]
	I1126 20:21:03.246563  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:03.250137  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:21:03.250185  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:21:03.275762  211567 cri.go:89] found id: ""
	I1126 20:21:03.275789  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.275797  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:21:03.275803  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:21:03.275865  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:21:03.300254  211567 cri.go:89] found id: ""
	I1126 20:21:03.300277  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.300286  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:21:03.300293  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:21:03.300333  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:21:03.324707  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:21:03.324728  211567 cri.go:89] found id: ""
	I1126 20:21:03.324738  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:21:03.324786  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:03.328172  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:21:03.328228  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:21:03.352636  211567 cri.go:89] found id: ""
	I1126 20:21:03.352656  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.352665  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:21:03.352671  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:21:03.352721  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:21:03.377985  211567 cri.go:89] found id: "43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5"
	I1126 20:21:03.378005  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:21:03.378010  211567 cri.go:89] found id: ""
	I1126 20:21:03.378018  211567 logs.go:282] 2 containers: [43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:21:03.378066  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:03.381640  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:03.384940  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:21:03.384988  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:21:03.409114  211567 cri.go:89] found id: ""
	I1126 20:21:03.409135  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.409143  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:21:03.409150  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:21:03.409198  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:21:03.433124  211567 cri.go:89] found id: ""
	I1126 20:21:03.433143  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.433148  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:21:03.433164  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:21:03.433175  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:21:03.518659  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:21:03.518688  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:21:03.532126  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:21:03.532151  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:21:03.584472  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:21:03.584490  211567 logs.go:123] Gathering logs for kube-controller-manager [43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5] ...
	I1126 20:21:03.584504  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5"
	I1126 20:21:03.608998  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:21:03.609021  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:21:03.654905  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:21:03.654929  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:21:03.683141  211567 logs.go:123] Gathering logs for kube-apiserver [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c] ...
	I1126 20:21:03.683162  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:21:03.714240  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:21:03.714263  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:21:03.765079  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:21:03.765103  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	
	
	==> CRI-O <==
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.825201614Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f238ce5579b9631d14d43fb7c6eca63d6e4841c169ba341fef637c933dae6182/merged/etc/group: no such file or directory"
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.825639827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.863843797Z" level=info msg="Created container d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs/kubernetes-dashboard" id=1840d4db-9ab8-4802-a1b7-21f3cf55fbfa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.864507025Z" level=info msg="Starting container: d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb" id=63f03e11-4981-4b95-bd74-c4f8f9a6ab11 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.866413087Z" level=info msg="Started container" PID=1528 containerID=d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs/kubernetes-dashboard id=63f03e11-4981-4b95-bd74-c4f8f9a6ab11 name=/runtime.v1.RuntimeService/StartContainer sandboxID=03ce806a37f7730413b18862cb23a8aa136b96e1985e43c29a4ff45bfa8d1a4f
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.183650797Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=9031b449-0145-417a-80d2-7d852a18fcaf name=/runtime.v1.ImageService/PullImage
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.184349035Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3f8898a0-2e03-4625-84ce-a77b4aa4ae76 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.187486419Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=3f140241-96b1-492a-90ca-b0f2e9b38d6d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.187602313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.193739677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.194206911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.215038919Z" level=info msg="Created container e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=3f140241-96b1-492a-90ca-b0f2e9b38d6d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.215535469Z" level=info msg="Starting container: e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0" id=02b0e6a2-f6aa-4382-9adf-b94158bb7367 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.217125533Z" level=info msg="Started container" PID=1755 containerID=e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper id=02b0e6a2-f6aa-4382-9adf-b94158bb7367 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d214e4e1535f0ed3867d04ef1dc64416c3bf57faf76db7250d67ba32eb41422a
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.247125219Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0363d334-a6ce-4449-85bf-722a2db67370 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.249654036Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aedab969-88b3-475f-bdee-a70571b950bb name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.253724688Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=f5bfe5a1-b1ce-4e68-b731-b72f546d8ffb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.253840461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.260871164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.261318535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.283581507Z" level=info msg="Created container 55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=f5bfe5a1-b1ce-4e68-b731-b72f546d8ffb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.284118156Z" level=info msg="Starting container: 55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6" id=f9f0a2b1-6230-4fa9-91ed-8543a715e5cf name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.285851029Z" level=info msg="Started container" PID=1766 containerID=55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper id=f9f0a2b1-6230-4fa9-91ed-8543a715e5cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=d214e4e1535f0ed3867d04ef1dc64416c3bf57faf76db7250d67ba32eb41422a
	Nov 26 20:20:56 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:56.252422538Z" level=info msg="Removing container: e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0" id=3c7d0ed3-a4ee-4355-84a5-6a136dab631a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:20:56 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:56.261483021Z" level=info msg="Removed container e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=3c7d0ed3-a4ee-4355-84a5-6a136dab631a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	55649a60515f7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   1                   d214e4e1535f0       dashboard-metrics-scraper-5f989dc9cf-jqrrz       kubernetes-dashboard
	d0ff3b383353d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   14 seconds ago      Running             kubernetes-dashboard        0                   03ce806a37f77       kubernetes-dashboard-8694d4445c-j28gs            kubernetes-dashboard
	6c6f323935b9b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           27 seconds ago      Running             coredns                     0                   37c137e30da13       coredns-5dd5756b68-jhrhx                         kube-system
	165a11a2acfbf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           27 seconds ago      Running             busybox                     1                   e040ebfb68037       busybox                                          default
	accee1a5d908d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           30 seconds ago      Running             kube-proxy                  0                   4b386be27b3bb       kube-proxy-qqdfx                                 kube-system
	39908a2bff30f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           30 seconds ago      Exited              storage-provisioner         0                   409f8abc87234       storage-provisioner                              kube-system
	16fda5da153ba       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           30 seconds ago      Running             kindnet-cni                 0                   ea0c8984ec22c       kindnet-zlg4b                                    kube-system
	a504f533180fa       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           33 seconds ago      Running             kube-controller-manager     0                   20182ff90baa7       kube-controller-manager-old-k8s-version-157431   kube-system
	d8d8479be421b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           33 seconds ago      Running             kube-apiserver              0                   bad2a581a38fb       kube-apiserver-old-k8s-version-157431            kube-system
	9646e408ccc61       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           33 seconds ago      Running             etcd                        0                   0114291b18acb       etcd-old-k8s-version-157431                      kube-system
	abbeedf1745d5       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           33 seconds ago      Running             kube-scheduler              0                   5108ec064f552       kube-scheduler-old-k8s-version-157431            kube-system
	
	
	==> coredns [6c6f323935b9bb8fa8c4aab620011fbe3b488cd55cf74ff43d5189c233b9a31a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55998 - 5028 "HINFO IN 4798133491747807030.4566811342291500471. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063988172s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-157431
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-157431
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=old-k8s-version-157431
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_19_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:19:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-157431
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:20:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:20:46 +0000   Wed, 26 Nov 2025 20:19:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:20:46 +0000   Wed, 26 Nov 2025 20:19:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:20:46 +0000   Wed, 26 Nov 2025 20:19:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:20:46 +0000   Wed, 26 Nov 2025 20:20:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-157431
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                55f945af-c138-4761-b59d-13bed6931065
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-5dd5756b68-jhrhx                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     83s
	  kube-system                 etcd-old-k8s-version-157431                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         96s
	  kube-system                 kindnet-zlg4b                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-old-k8s-version-157431             250m (3%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-old-k8s-version-157431    200m (2%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-qqdfx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-old-k8s-version-157431             100m (1%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jqrrz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-j28gs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 30s                kube-proxy       
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node old-k8s-version-157431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           84s                node-controller  Node old-k8s-version-157431 event: Registered Node old-k8s-version-157431 in Controller
	  Normal  NodeReady                71s                kubelet          Node old-k8s-version-157431 status is now: NodeReady
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x9 over 34s)  kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node old-k8s-version-157431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x7 over 34s)  kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19s                node-controller  Node old-k8s-version-157431 event: Registered Node old-k8s-version-157431 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [9646e408ccc61c8cc3bf687319bc9e134650a818bbe9f04715d5a74998506dc6] <==
	{"level":"info","ts":"2025-11-26T20:20:33.742482Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:20:33.742498Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:20:33.742832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-26T20:20:33.742951Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-26T20:20:33.743152Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:20:33.743224Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:20:33.744518Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-26T20:20:33.744618Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:20:33.744642Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:20:33.744747Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-26T20:20:33.744832Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-26T20:20:34.732051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-26T20:20:34.73209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-26T20:20:34.732104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-26T20:20:34.732135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-26T20:20:34.732141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-26T20:20:34.732149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-26T20:20:34.732155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-26T20:20:34.733099Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-157431 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-26T20:20:34.733126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:20:34.733113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:20:34.733377Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-26T20:20:34.733405Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-26T20:20:34.734527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-26T20:20:34.734537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:21:07 up  1:03,  0 user,  load average: 2.98, 2.93, 1.89
	Linux old-k8s-version-157431 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [16fda5da153ba465a321b8058a258b00de264660a2e64bf42345ae13e96d6044] <==
	I1126 20:20:36.721553       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:20:36.721760       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:20:36.721876       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:20:36.721895       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:20:36.721907       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:20:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:20:36.921678       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:20:36.921710       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:20:36.921722       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:20:36.922166       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:20:37.422448       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:20:37.422488       1 metrics.go:72] Registering metrics
	I1126 20:20:37.422554       1 controller.go:711] "Syncing nftables rules"
	I1126 20:20:46.921936       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:20:46.922020       1 main.go:301] handling current node
	I1126 20:20:56.922075       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:20:56.922142       1 main.go:301] handling current node
	I1126 20:21:06.927568       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:21:06.927597       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d8d8479be421b12e08c13d54e96b2828f60bf10165cebecd5a7fe6990720f66d] <==
	I1126 20:20:35.750670       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1126 20:20:35.750717       1 aggregator.go:166] initial CRD sync complete...
	I1126 20:20:35.750729       1 autoregister_controller.go:141] Starting autoregister controller
	I1126 20:20:35.750737       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:20:35.750745       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:20:35.750779       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1126 20:20:35.750863       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1126 20:20:35.750944       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1126 20:20:35.750866       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1126 20:20:35.750873       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:20:35.751011       1 shared_informer.go:318] Caches are synced for configmaps
	E1126 20:20:35.755813       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:20:35.783960       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1126 20:20:35.794422       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:20:36.490515       1 controller.go:624] quota admission added evaluator for: namespaces
	I1126 20:20:36.527090       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1126 20:20:36.546779       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:20:36.552842       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:20:36.560713       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1126 20:20:36.592026       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.220.69"}
	I1126 20:20:36.605689       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.173.54"}
	I1126 20:20:36.653688       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:20:48.797261       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:20:48.997102       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1126 20:20:49.047112       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a504f533180fa3660ae26e19df70e0f8b5e648c9eac05e818e5a34d99dfa556d] <==
	I1126 20:20:48.794697       1 shared_informer.go:318] Caches are synced for disruption
	I1126 20:20:48.807804       1 shared_informer.go:318] Caches are synced for crt configmap
	I1126 20:20:48.813044       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1126 20:20:48.843391       1 shared_informer.go:318] Caches are synced for resource quota
	I1126 20:20:48.902740       1 shared_informer.go:318] Caches are synced for resource quota
	I1126 20:20:49.000502       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1126 20:20:49.001507       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1126 20:20:49.201429       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-j28gs"
	I1126 20:20:49.201826       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-jqrrz"
	I1126 20:20:49.207534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="207.259884ms"
	I1126 20:20:49.207795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="206.533497ms"
	I1126 20:20:49.212969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.385867ms"
	I1126 20:20:49.213059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.843µs"
	I1126 20:20:49.214129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.287683ms"
	I1126 20:20:49.214208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.886µs"
	I1126 20:20:49.218021       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:20:49.218201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.288µs"
	I1126 20:20:49.224825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="38.935µs"
	I1126 20:20:49.244482       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:20:49.244501       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1126 20:20:53.260420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.553237ms"
	I1126 20:20:53.260536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.139µs"
	I1126 20:20:55.256592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.614µs"
	I1126 20:20:56.262187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.842µs"
	I1126 20:20:57.264403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.476µs"
	
	
	==> kube-proxy [accee1a5d908d0156207ba0fe0bd3d00a19a4cdf28557c3848fd5a55e1fa67e5] <==
	I1126 20:20:36.569106       1 server_others.go:69] "Using iptables proxy"
	I1126 20:20:36.578316       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1126 20:20:36.598855       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:20:36.601351       1 server_others.go:152] "Using iptables Proxier"
	I1126 20:20:36.601377       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1126 20:20:36.601384       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1126 20:20:36.601414       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1126 20:20:36.601669       1 server.go:846] "Version info" version="v1.28.0"
	I1126 20:20:36.601686       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:20:36.602277       1 config.go:188] "Starting service config controller"
	I1126 20:20:36.602308       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1126 20:20:36.602285       1 config.go:97] "Starting endpoint slice config controller"
	I1126 20:20:36.602341       1 config.go:315] "Starting node config controller"
	I1126 20:20:36.602354       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1126 20:20:36.602365       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1126 20:20:36.702802       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1126 20:20:36.702827       1 shared_informer.go:318] Caches are synced for node config
	I1126 20:20:36.702811       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [abbeedf1745d559c442cf4064712a5660d93918ca1d77d450979a8d7cb48fd5b] <==
	I1126 20:20:34.139562       1 serving.go:348] Generated self-signed cert in-memory
	W1126 20:20:35.675992       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:20:35.676025       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:20:35.676039       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:20:35.676049       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:20:35.711129       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1126 20:20:35.711166       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:20:35.712832       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:20:35.712881       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1126 20:20:35.713988       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1126 20:20:35.714064       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1126 20:20:35.813672       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.700245     735 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.700342     735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/483a52cf-1d0a-4b51-b9b1-d986b07fa545-config-volume podName:483a52cf-1d0a-4b51-b9b1-d986b07fa545 nodeName:}" failed. No retries permitted until 2025-11-26 20:20:39.700319627 +0000 UTC m=+6.605603985 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/483a52cf-1d0a-4b51-b9b1-d986b07fa545-config-volume") pod "coredns-5dd5756b68-jhrhx" (UID: "483a52cf-1d0a-4b51-b9b1-d986b07fa545") : object "kube-system"/"coredns" not registered
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.901339     735 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.901371     735 projected.go:198] Error preparing data for projected volume kube-api-access-kgqzr for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.901430     735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6c41f35-cc7b-423c-b8e2-76531e7a8b3b-kube-api-access-kgqzr podName:d6c41f35-cc7b-423c-b8e2-76531e7a8b3b nodeName:}" failed. No retries permitted until 2025-11-26 20:20:39.901415518 +0000 UTC m=+6.806699872 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kgqzr" (UniqueName: "kubernetes.io/projected/d6c41f35-cc7b-423c-b8e2-76531e7a8b3b-kube-api-access-kgqzr") pod "busybox" (UID: "d6c41f35-cc7b-423c-b8e2-76531e7a8b3b") : object "default"/"kube-root-ca.crt" not registered
	Nov 26 20:20:41 old-k8s-version-157431 kubelet[735]: I1126 20:20:41.570413     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.208235     735 topology_manager.go:215] "Topology Admit Handler" podUID="8c0023d6-bea5-44e3-bfee-5f411cad2ae6" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-jqrrz"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.209952     735 topology_manager.go:215] "Topology Admit Handler" podUID="bcb842e0-68ab-415a-9899-b57f19282469" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-j28gs"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.259348     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpxr8\" (UniqueName: \"kubernetes.io/projected/bcb842e0-68ab-415a-9899-b57f19282469-kube-api-access-wpxr8\") pod \"kubernetes-dashboard-8694d4445c-j28gs\" (UID: \"bcb842e0-68ab-415a-9899-b57f19282469\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.259393     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dkk\" (UniqueName: \"kubernetes.io/projected/8c0023d6-bea5-44e3-bfee-5f411cad2ae6-kube-api-access-66dkk\") pod \"dashboard-metrics-scraper-5f989dc9cf-jqrrz\" (UID: \"8c0023d6-bea5-44e3-bfee-5f411cad2ae6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.259416     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bcb842e0-68ab-415a-9899-b57f19282469-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-j28gs\" (UID: \"bcb842e0-68ab-415a-9899-b57f19282469\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.259443     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8c0023d6-bea5-44e3-bfee-5f411cad2ae6-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-jqrrz\" (UID: \"8c0023d6-bea5-44e3-bfee-5f411cad2ae6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz"
	Nov 26 20:20:53 old-k8s-version-157431 kubelet[735]: I1126 20:20:53.253512     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs" podStartSLOduration=0.968626714 podCreationTimestamp="2025-11-26 20:20:49 +0000 UTC" firstStartedPulling="2025-11-26 20:20:49.531155338 +0000 UTC m=+16.436439705" lastFinishedPulling="2025-11-26 20:20:52.815952854 +0000 UTC m=+19.721237221" observedRunningTime="2025-11-26 20:20:53.253366192 +0000 UTC m=+20.158650567" watchObservedRunningTime="2025-11-26 20:20:53.25342423 +0000 UTC m=+20.158708605"
	Nov 26 20:20:55 old-k8s-version-157431 kubelet[735]: I1126 20:20:55.246689     735 scope.go:117] "RemoveContainer" containerID="e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0"
	Nov 26 20:20:56 old-k8s-version-157431 kubelet[735]: I1126 20:20:56.251131     735 scope.go:117] "RemoveContainer" containerID="e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0"
	Nov 26 20:20:56 old-k8s-version-157431 kubelet[735]: I1126 20:20:56.251310     735 scope.go:117] "RemoveContainer" containerID="55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	Nov 26 20:20:56 old-k8s-version-157431 kubelet[735]: E1126 20:20:56.251718     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jqrrz_kubernetes-dashboard(8c0023d6-bea5-44e3-bfee-5f411cad2ae6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz" podUID="8c0023d6-bea5-44e3-bfee-5f411cad2ae6"
	Nov 26 20:20:57 old-k8s-version-157431 kubelet[735]: I1126 20:20:57.254571     735 scope.go:117] "RemoveContainer" containerID="55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	Nov 26 20:20:57 old-k8s-version-157431 kubelet[735]: E1126 20:20:57.254863     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jqrrz_kubernetes-dashboard(8c0023d6-bea5-44e3-bfee-5f411cad2ae6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz" podUID="8c0023d6-bea5-44e3-bfee-5f411cad2ae6"
	Nov 26 20:20:59 old-k8s-version-157431 kubelet[735]: I1126 20:20:59.509380     735 scope.go:117] "RemoveContainer" containerID="55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	Nov 26 20:20:59 old-k8s-version-157431 kubelet[735]: E1126 20:20:59.509858     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jqrrz_kubernetes-dashboard(8c0023d6-bea5-44e3-bfee-5f411cad2ae6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz" podUID="8c0023d6-bea5-44e3-bfee-5f411cad2ae6"
	Nov 26 20:21:04 old-k8s-version-157431 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:21:04 old-k8s-version-157431 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:21:04 old-k8s-version-157431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:21:04 old-k8s-version-157431 systemd[1]: kubelet.service: Consumed 1.024s CPU time.
	
	
	==> kubernetes-dashboard [d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb] <==
	2025/11/26 20:20:52 Starting overwatch
	2025/11/26 20:20:52 Using namespace: kubernetes-dashboard
	2025/11/26 20:20:52 Using in-cluster config to connect to apiserver
	2025/11/26 20:20:52 Using secret token for csrf signing
	2025/11/26 20:20:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:20:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:20:52 Successful initial request to the apiserver, version: v1.28.0
	2025/11/26 20:20:52 Generating JWE encryption key
	2025/11/26 20:20:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:20:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:20:53 Initializing JWE encryption key from synchronized object
	2025/11/26 20:20:53 Creating in-cluster Sidecar client
	2025/11/26 20:20:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:20:53 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [39908a2bff30f053ae4e66e2b1f1ed4846ab95ad78050fe44589cb96f7e2ddb9] <==
	I1126 20:20:36.533276       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:21:06.536921       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-157431 -n old-k8s-version-157431
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-157431 -n old-k8s-version-157431: exit status 2 (322.482457ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-157431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-157431
helpers_test.go:243: (dbg) docker inspect old-k8s-version-157431:

-- stdout --
	[
	    {
	        "Id": "77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf",
	        "Created": "2025-11-26T20:19:16.110022495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248374,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:20:27.036811807Z",
	            "FinishedAt": "2025-11-26T20:20:26.182757108Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/hostname",
	        "HostsPath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/hosts",
	        "LogPath": "/var/lib/docker/containers/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf/77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf-json.log",
	        "Name": "/old-k8s-version-157431",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-157431:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-157431",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "77bb37b66fd7027025a22d6f5aea0b07be6fffaf6fe8b99efa3c3d7655886caf",
	                "LowerDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c973fc8b80ac671221ba557ff7dfd317e56a412bc6c5655554bd65755b08efc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-157431",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-157431/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-157431",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-157431",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-157431",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7a6192aeaf4e67c796ad61fee172ea0757828251dfb01a56f7aa51e613593c11",
	            "SandboxKey": "/var/run/docker/netns/7a6192aeaf4e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-157431": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d4f1dd69a726aa0138274371b25ff8174904f4f402419e4752de500c743a887",
	                    "EndpointID": "0b30ba07803e4894731f3e73b76fc587179d5ca8d57350c4dad694b61f719e32",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "22:23:97:a5:5b:ee",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-157431",
	                        "77bb37b66fd7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157431 -n old-k8s-version-157431
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157431 -n old-k8s-version-157431: exit status 2 (318.615435ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-157431 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-157431 logs -n 25: (1.047283056s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-825702 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo containerd config dump                                                                                                                                                                                                  │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ ssh     │ -p cilium-825702 sudo crio config                                                                                                                                                                                                             │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │                     │
	│ delete  │ -p cilium-825702                                                                                                                                                                                                                              │ cilium-825702          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │ 26 Nov 25 20:18 UTC │
	│ start   │ -p cert-options-706331 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │ 26 Nov 25 20:19 UTC │
	│ ssh     │ cert-options-706331 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ ssh     │ -p cert-options-706331 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ delete  │ -p cert-options-706331                                                                                                                                                                                                                        │ cert-options-706331    │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-157431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │                     │
	│ stop    │ -p old-k8s-version-157431 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-157431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ image   │ old-k8s-version-157431 image list --format=json                                                                                                                                                                                               │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ pause   │ -p old-k8s-version-157431 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-157431 │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:20:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:20:26.818437  248170 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:20:26.818551  248170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:20:26.818560  248170 out.go:374] Setting ErrFile to fd 2...
	I1126 20:20:26.818564  248170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:20:26.818750  248170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:20:26.819148  248170 out.go:368] Setting JSON to false
	I1126 20:20:26.820318  248170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3777,"bootTime":1764184650,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:20:26.820373  248170 start.go:143] virtualization: kvm guest
	I1126 20:20:26.822194  248170 out.go:179] * [old-k8s-version-157431] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:20:26.823308  248170 notify.go:221] Checking for updates...
	I1126 20:20:26.823332  248170 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:20:26.824359  248170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:20:26.825754  248170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:20:26.826897  248170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:20:26.828116  248170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:20:26.829080  248170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:20:26.830529  248170 config.go:182] Loaded profile config "old-k8s-version-157431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:20:26.832158  248170 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1126 20:20:26.833246  248170 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:20:26.857357  248170 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:20:26.857470  248170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:20:26.911898  248170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-26 20:20:26.901890798 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:20:26.911998  248170 docker.go:319] overlay module found
	I1126 20:20:26.913416  248170 out.go:179] * Using the docker driver based on existing profile
	I1126 20:20:26.914430  248170 start.go:309] selected driver: docker
	I1126 20:20:26.914440  248170 start.go:927] validating driver "docker" against &{Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:20:26.914530  248170 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:20:26.915062  248170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:20:26.970248  248170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-26 20:20:26.961278035 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:20:26.970546  248170 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:20:26.970576  248170 cni.go:84] Creating CNI manager for ""
	I1126 20:20:26.970628  248170 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:20:26.970664  248170 start.go:353] cluster config:
	{Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:20:26.972021  248170 out.go:179] * Starting "old-k8s-version-157431" primary control-plane node in "old-k8s-version-157431" cluster
	I1126 20:20:26.973050  248170 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:20:26.974201  248170 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:20:26.975251  248170 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:20:26.975284  248170 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1126 20:20:26.975304  248170 cache.go:65] Caching tarball of preloaded images
	I1126 20:20:26.975344  248170 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:20:26.975393  248170 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:20:26.975404  248170 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1126 20:20:26.975539  248170 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/config.json ...
	I1126 20:20:26.994764  248170 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:20:26.994783  248170 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:20:26.994797  248170 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:20:26.994821  248170 start.go:360] acquireMachinesLock for old-k8s-version-157431: {Name:mkea810daa6c92d5318c72561874a0f25d5c921b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:20:26.994879  248170 start.go:364] duration metric: took 34.603µs to acquireMachinesLock for "old-k8s-version-157431"
	I1126 20:20:26.994895  248170 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:20:26.994902  248170 fix.go:54] fixHost starting: 
	I1126 20:20:26.995087  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:27.011470  248170 fix.go:112] recreateIfNeeded on old-k8s-version-157431: state=Stopped err=<nil>
	W1126 20:20:27.011495  248170 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:20:23.990794  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:23.991224  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:23.991272  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:23.991315  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:24.024690  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:24.024717  216504 cri.go:89] found id: ""
	I1126 20:20:24.024727  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:24.024783  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:24.028511  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:24.028559  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:24.060663  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:24.060684  216504 cri.go:89] found id: ""
	I1126 20:20:24.060693  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:24.060743  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:24.063992  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:24.064039  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:24.095759  216504 cri.go:89] found id: ""
	I1126 20:20:24.095776  216504 logs.go:282] 0 containers: []
	W1126 20:20:24.095782  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:24.095792  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:24.095842  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:24.127784  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:24.127807  216504 cri.go:89] found id: ""
	I1126 20:20:24.127816  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:24.127864  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:24.131286  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:24.131336  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:24.163992  216504 cri.go:89] found id: ""
	I1126 20:20:24.164015  216504 logs.go:282] 0 containers: []
	W1126 20:20:24.164021  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:24.164027  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:24.164074  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:24.195914  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:24.195933  216504 cri.go:89] found id: ""
	I1126 20:20:24.195944  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:24.196003  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:24.199424  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:24.199502  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:24.231398  216504 cri.go:89] found id: ""
	I1126 20:20:24.231420  216504 logs.go:282] 0 containers: []
	W1126 20:20:24.231427  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:24.231433  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:24.231500  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:24.263657  216504 cri.go:89] found id: ""
	I1126 20:20:24.263682  216504 logs.go:282] 0 containers: []
	W1126 20:20:24.263692  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:24.263708  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:24.263718  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:24.279006  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:24.279027  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:24.345423  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:24.345447  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:24.377909  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:24.377932  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:24.419509  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:24.419535  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:24.510033  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:24.510063  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:24.568362  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:24.568387  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:24.568402  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:24.604439  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:24.604474  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:24.636747  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:24.636772  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:27.172522  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:27.172878  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:27.172933  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:27.172987  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:27.210365  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:27.210400  216504 cri.go:89] found id: ""
	I1126 20:20:27.210417  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:27.210486  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:27.214451  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:27.214542  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:27.249896  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:27.249917  216504 cri.go:89] found id: ""
	I1126 20:20:27.249927  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:27.249975  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:27.253630  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:27.253699  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:27.290543  216504 cri.go:89] found id: ""
	I1126 20:20:27.290569  216504 logs.go:282] 0 containers: []
	W1126 20:20:27.290577  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:27.290585  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:27.290636  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:27.329319  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:27.329344  216504 cri.go:89] found id: ""
	I1126 20:20:27.329354  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:27.329415  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:27.333334  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:27.333385  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:27.369006  216504 cri.go:89] found id: ""
	I1126 20:20:27.369029  216504 logs.go:282] 0 containers: []
	W1126 20:20:27.369039  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:27.369046  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:27.369090  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:27.404435  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:27.404478  216504 cri.go:89] found id: ""
	I1126 20:20:27.404488  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:27.404545  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:27.408036  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:27.408086  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:27.447986  216504 cri.go:89] found id: ""
	I1126 20:20:27.448011  216504 logs.go:282] 0 containers: []
	W1126 20:20:27.448020  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:27.448028  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:27.448089  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:27.481175  216504 cri.go:89] found id: ""
	I1126 20:20:27.481195  216504 logs.go:282] 0 containers: []
	W1126 20:20:27.481202  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:27.481215  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:27.481225  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:27.525560  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:27.525584  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:27.630135  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:27.630168  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:27.669450  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:27.669490  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:27.702147  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:27.702177  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:27.735479  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:27.735508  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:27.749952  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:27.749979  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:27.808579  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:27.808598  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:27.808610  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:27.877986  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:27.878021  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:25.870921  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:25.871272  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:25.871318  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:25.871362  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:25.897016  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:25.897032  211567 cri.go:89] found id: ""
	I1126 20:20:25.897039  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:25.897079  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:25.900793  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:25.900848  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:25.925270  211567 cri.go:89] found id: ""
	I1126 20:20:25.925295  211567 logs.go:282] 0 containers: []
	W1126 20:20:25.925302  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:25.925307  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:25.925356  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:25.949769  211567 cri.go:89] found id: ""
	I1126 20:20:25.949791  211567 logs.go:282] 0 containers: []
	W1126 20:20:25.949801  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:25.949807  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:25.949854  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:25.974141  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:25.974163  211567 cri.go:89] found id: ""
	I1126 20:20:25.974173  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:25.974217  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:25.977749  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:25.977793  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:26.001378  211567 cri.go:89] found id: ""
	I1126 20:20:26.001399  211567 logs.go:282] 0 containers: []
	W1126 20:20:26.001410  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:26.001416  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:26.001475  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:26.025142  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:26.025156  211567 cri.go:89] found id: ""
	I1126 20:20:26.025168  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:26.025211  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:26.028851  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:26.028906  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:26.053121  211567 cri.go:89] found id: ""
	I1126 20:20:26.053141  211567 logs.go:282] 0 containers: []
	W1126 20:20:26.053150  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:26.053157  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:26.053200  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:26.078227  211567 cri.go:89] found id: ""
	I1126 20:20:26.078243  211567 logs.go:282] 0 containers: []
	W1126 20:20:26.078250  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:26.078257  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:26.078266  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:26.131017  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:26.131039  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:26.131053  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:26.164080  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:26.164110  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:26.215069  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:26.215099  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:26.244701  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:26.244727  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:26.290317  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:26.290342  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:26.319494  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:26.319517  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:26.408302  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:26.408327  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:28.922377  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:28.922764  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:28.922828  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:28.922889  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:28.948750  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:28.948770  211567 cri.go:89] found id: ""
	I1126 20:20:28.948781  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:28.948837  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:28.952682  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:28.952739  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:28.976587  211567 cri.go:89] found id: ""
	I1126 20:20:28.976611  211567 logs.go:282] 0 containers: []
	W1126 20:20:28.976620  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:28.976627  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:28.976679  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:29.000078  211567 cri.go:89] found id: ""
	I1126 20:20:29.000099  211567 logs.go:282] 0 containers: []
	W1126 20:20:29.000109  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:29.000116  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:29.000160  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:29.024173  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:29.024190  211567 cri.go:89] found id: ""
	I1126 20:20:29.024197  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:29.024238  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:29.027691  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:29.027749  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:29.053216  211567 cri.go:89] found id: ""
	I1126 20:20:29.053239  211567 logs.go:282] 0 containers: []
	W1126 20:20:29.053252  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:29.053257  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:29.053310  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:29.077339  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:29.077360  211567 cri.go:89] found id: ""
	I1126 20:20:29.077369  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:29.077420  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:29.080947  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:29.081000  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:29.104636  211567 cri.go:89] found id: ""
	I1126 20:20:29.104657  211567 logs.go:282] 0 containers: []
	W1126 20:20:29.104663  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:29.104668  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:29.104707  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:29.129948  211567 cri.go:89] found id: ""
	I1126 20:20:29.129965  211567 logs.go:282] 0 containers: []
	W1126 20:20:29.129972  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:29.129980  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:29.129988  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:29.174507  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:29.174528  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:29.202589  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:29.202609  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:29.285682  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:29.285706  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:29.298921  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:29.298945  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:29.350940  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:29.350962  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:29.350974  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:29.381555  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:29.381579  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:29.429852  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:29.429878  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:27.013047  248170 out.go:252] * Restarting existing docker container for "old-k8s-version-157431" ...
	I1126 20:20:27.013106  248170 cli_runner.go:164] Run: docker start old-k8s-version-157431
	I1126 20:20:27.291654  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:27.310140  248170 kic.go:430] container "old-k8s-version-157431" state is running.
	I1126 20:20:27.310641  248170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-157431
	I1126 20:20:27.330099  248170 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/config.json ...
	I1126 20:20:27.330338  248170 machine.go:94] provisionDockerMachine start ...
	I1126 20:20:27.330424  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:27.349666  248170 main.go:143] libmachine: Using SSH client type: native
	I1126 20:20:27.349899  248170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1126 20:20:27.349911  248170 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:20:27.350525  248170 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47394->127.0.0.1:33058: read: connection reset by peer
	I1126 20:20:30.490730  248170 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-157431
	
	I1126 20:20:30.490757  248170 ubuntu.go:182] provisioning hostname "old-k8s-version-157431"
	I1126 20:20:30.490824  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:30.509658  248170 main.go:143] libmachine: Using SSH client type: native
	I1126 20:20:30.509921  248170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1126 20:20:30.509942  248170 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-157431 && echo "old-k8s-version-157431" | sudo tee /etc/hostname
	I1126 20:20:30.662884  248170 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-157431
	
	I1126 20:20:30.662962  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:30.682002  248170 main.go:143] libmachine: Using SSH client type: native
	I1126 20:20:30.682227  248170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1126 20:20:30.682245  248170 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-157431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-157431/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-157431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:20:30.823849  248170 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:20:30.823876  248170 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:20:30.823925  248170 ubuntu.go:190] setting up certificates
	I1126 20:20:30.823943  248170 provision.go:84] configureAuth start
	I1126 20:20:30.824000  248170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-157431
	I1126 20:20:30.841953  248170 provision.go:143] copyHostCerts
	I1126 20:20:30.842006  248170 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:20:30.842015  248170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:20:30.842085  248170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:20:30.842196  248170 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:20:30.842207  248170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:20:30.842249  248170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:20:30.842318  248170 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:20:30.842329  248170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:20:30.842365  248170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:20:30.842429  248170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-157431 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-157431]
	I1126 20:20:30.964785  248170 provision.go:177] copyRemoteCerts
	I1126 20:20:30.964842  248170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:20:30.964872  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:30.983629  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.084260  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:20:31.100853  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1126 20:20:31.117240  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:20:31.133209  248170 provision.go:87] duration metric: took 309.257526ms to configureAuth
	I1126 20:20:31.133229  248170 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:20:31.133370  248170 config.go:182] Loaded profile config "old-k8s-version-157431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:20:31.133447  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.151201  248170 main.go:143] libmachine: Using SSH client type: native
	I1126 20:20:31.151439  248170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1126 20:20:31.151471  248170 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:20:31.455505  248170 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:20:31.455534  248170 machine.go:97] duration metric: took 4.125179201s to provisionDockerMachine
	I1126 20:20:31.455549  248170 start.go:293] postStartSetup for "old-k8s-version-157431" (driver="docker")
	I1126 20:20:31.455562  248170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:20:31.455633  248170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:20:31.455676  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.473882  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.570817  248170 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:20:31.574103  248170 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:20:31.574147  248170 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:20:31.574156  248170 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:20:31.574199  248170 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:20:31.574265  248170 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:20:31.574344  248170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:20:31.581304  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:20:31.598131  248170 start.go:296] duration metric: took 142.56904ms for postStartSetup
	I1126 20:20:31.598197  248170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:20:31.598237  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.616517  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.709913  248170 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:20:31.714258  248170 fix.go:56] duration metric: took 4.719351205s for fixHost
	I1126 20:20:31.714280  248170 start.go:83] releasing machines lock for "old-k8s-version-157431", held for 4.719390513s
	I1126 20:20:31.714382  248170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-157431
	I1126 20:20:31.731738  248170 ssh_runner.go:195] Run: cat /version.json
	I1126 20:20:31.731802  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.731829  248170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:20:31.731897  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:31.749673  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.750007  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:31.900881  248170 ssh_runner.go:195] Run: systemctl --version
	I1126 20:20:31.906913  248170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:20:31.940752  248170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:20:31.944950  248170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:20:31.945002  248170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:20:31.952392  248170 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:20:31.952407  248170 start.go:496] detecting cgroup driver to use...
	I1126 20:20:31.952432  248170 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:20:31.952499  248170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:20:31.967165  248170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:20:31.979205  248170 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:20:31.979254  248170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:20:31.993300  248170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:20:32.006014  248170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:20:32.093726  248170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:20:32.176418  248170 docker.go:234] disabling docker service ...
	I1126 20:20:32.176490  248170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:20:32.192162  248170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:20:32.203912  248170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:20:32.287262  248170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:20:32.384389  248170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:20:32.396965  248170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:20:32.411369  248170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1126 20:20:32.411427  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.419960  248170 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:20:32.420016  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.428418  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.436761  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.446252  248170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:20:32.453785  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.461942  248170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:20:32.469776  248170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
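The sed commands above rewrite CRI-O's drop-in config in place: swap the pause image, switch the cgroup manager to systemd, reset `conmon_cgroup`, and make sure a `default_sysctls` list exists with the unprivileged-port sysctl prepended. A minimal sketch of the same substitutions run against a throwaway copy of the file (the file contents here are illustrative, not the job's actual `/etc/crio/crio.conf.d/02-crio.conf`; assumes GNU sed):

```shell
#!/bin/sh
# Throwaway config resembling /etc/crio/crio.conf.d/02-crio.conf.
conf=/tmp/02-crio.conf
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"
[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
EOF

# Same substitutions the log runs with sudo on the real file:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
# Ensure a default_sysctls list exists, then prepend the unprivileged-port sysctl.
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

cat "$conf"
```

The `grep -q … || sed -i …a…` guard is what makes the edit idempotent: rerunning it never produces a second `default_sysctls` block.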
	I1126 20:20:32.477996  248170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:20:32.485125  248170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:20:32.493012  248170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:20:32.572909  248170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:20:32.706653  248170 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:20:32.706704  248170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:20:32.710486  248170 start.go:564] Will wait 60s for crictl version
	I1126 20:20:32.710540  248170 ssh_runner.go:195] Run: which crictl
	I1126 20:20:32.713975  248170 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:20:32.738237  248170 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:20:32.738295  248170 ssh_runner.go:195] Run: crio --version
	I1126 20:20:32.765122  248170 ssh_runner.go:195] Run: crio --version
	I1126 20:20:32.793081  248170 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1126 20:20:30.422400  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:30.422751  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:30.422799  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:30.422849  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:30.456109  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:30.456136  216504 cri.go:89] found id: ""
	I1126 20:20:30.456146  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:30.456196  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:30.459577  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:30.459627  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:30.492790  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:30.492811  216504 cri.go:89] found id: ""
	I1126 20:20:30.492820  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:30.492868  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:30.496478  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:30.496541  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:30.531548  216504 cri.go:89] found id: ""
	I1126 20:20:30.531574  216504 logs.go:282] 0 containers: []
	W1126 20:20:30.531584  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:30.531592  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:30.531644  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:30.565649  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:30.565672  216504 cri.go:89] found id: ""
	I1126 20:20:30.565683  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:30.565742  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:30.569586  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:30.569644  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:30.602566  216504 cri.go:89] found id: ""
	I1126 20:20:30.602591  216504 logs.go:282] 0 containers: []
	W1126 20:20:30.602600  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:30.602609  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:30.602661  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:30.634966  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:30.634991  216504 cri.go:89] found id: ""
	I1126 20:20:30.635000  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:30.635039  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:30.638480  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:30.638537  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:30.672199  216504 cri.go:89] found id: ""
	I1126 20:20:30.672222  216504 logs.go:282] 0 containers: []
	W1126 20:20:30.672231  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:30.672238  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:30.672295  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:30.706092  216504 cri.go:89] found id: ""
	I1126 20:20:30.706115  216504 logs.go:282] 0 containers: []
	W1126 20:20:30.706125  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:30.706141  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:30.706155  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:30.743705  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:30.743729  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:30.776900  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:30.776929  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:30.848539  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:30.848563  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:30.894402  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:30.894428  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:30.910002  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:30.910030  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:30.967630  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:30.967650  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:30.967662  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:31.002705  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:31.002734  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:31.039835  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:31.039863  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:32.794207  248170 cli_runner.go:164] Run: docker network inspect old-k8s-version-157431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:20:32.811852  248170 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:20:32.815629  248170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
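The `/etc/hosts` update above uses a grep-and-rewrite idiom: strip any existing `host.minikube.internal` line, append the fresh mapping, write to a temp file, then copy it back over the original. A sketch of the same idiom against a scratch file (the IP and hostname follow the log; the path is a stand-in so no sudo is needed, and the stale IP is invented for the demo):

```shell
#!/bin/sh
hosts=/tmp/hosts.demo
# Seed a hosts file that already carries a stale minikube mapping.
printf '127.0.0.1\tlocalhost\n192.168.99.1\thost.minikube.internal\n' > "$hosts"

# Same pattern as the log: drop any existing mapping, append the current one,
# then copy the rewritten file back into place (the log uses sudo cp to /etc/hosts).
tab=$(printf '\t')
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts"
rm -f /tmp/h.$$

cat "$hosts"
```

Writing to `/tmp/h.$$` first and then copying keeps the hosts file from ever being observed half-written.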
	I1126 20:20:32.825013  248170 kubeadm.go:884] updating cluster {Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:20:32.825113  248170 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:20:32.825160  248170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:20:32.855891  248170 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:20:32.855919  248170 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:20:32.855964  248170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:20:32.880233  248170 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:20:32.880251  248170 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:20:32.880258  248170 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1126 20:20:32.880341  248170 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-157431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:20:32.880397  248170 ssh_runner.go:195] Run: crio config
	I1126 20:20:32.923527  248170 cni.go:84] Creating CNI manager for ""
	I1126 20:20:32.923550  248170 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:20:32.923566  248170 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:20:32.923596  248170 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-157431 NodeName:old-k8s-version-157431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:20:32.923748  248170 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-157431"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
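The kubeadm config written above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A quick way to sanity-check such a stream is to count separators and `kind:` declarations; a sketch against a trimmed-down copy (contents abbreviated, not the full 2159-byte config the job scp's):

```shell
#!/bin/sh
cfg=/tmp/kubeadm.demo.yaml
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# Three separators between four documents; each document names its kind.
grep -c '^---$' "$cfg"
grep '^kind:' "$cfg"
```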
	I1126 20:20:32.923825  248170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1126 20:20:32.931445  248170 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:20:32.931499  248170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:20:32.938558  248170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1126 20:20:32.949970  248170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:20:32.961240  248170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1126 20:20:32.972576  248170 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:20:32.975767  248170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:20:32.984949  248170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:20:33.064452  248170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:20:33.091343  248170 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431 for IP: 192.168.76.2
	I1126 20:20:33.091378  248170 certs.go:195] generating shared ca certs ...
	I1126 20:20:33.091394  248170 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:20:33.091556  248170 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:20:33.091622  248170 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:20:33.091635  248170 certs.go:257] generating profile certs ...
	I1126 20:20:33.091741  248170 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.key
	I1126 20:20:33.091818  248170 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key.162086cc
	I1126 20:20:33.091880  248170 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.key
	I1126 20:20:33.092015  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:20:33.092057  248170 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:20:33.092067  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:20:33.092096  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:20:33.092126  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:20:33.092149  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:20:33.092193  248170 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:20:33.092865  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:20:33.110409  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:20:33.127547  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:20:33.144547  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:20:33.163617  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:20:33.183947  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:20:33.200265  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:20:33.216501  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:20:33.232558  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:20:33.249003  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:20:33.265426  248170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:20:33.282385  248170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:20:33.294216  248170 ssh_runner.go:195] Run: openssl version
	I1126 20:20:33.300181  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:20:33.307809  248170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:20:33.311252  248170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:20:33.311299  248170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:20:33.345009  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:20:33.352087  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:20:33.359914  248170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:20:33.363445  248170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:20:33.363539  248170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:20:33.398869  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:20:33.408555  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:20:33.416495  248170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:20:33.419928  248170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:20:33.419965  248170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:20:33.453474  248170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:20:33.461096  248170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:20:33.464603  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:20:33.498554  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:20:33.532660  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:20:33.566798  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:20:33.608505  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:20:33.651676  248170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
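The certificate steps above rely on two openssl idioms: `x509 -hash` prints the subject-name hash that names the `/etc/ssl/certs/<hash>.0` symlinks created earlier, and `-checkend 86400` exits nonzero if the cert expires within the next day. A sketch with a freshly generated self-signed cert (paths and CN are illustrative; assumes the `openssl` CLI is installed):

```shell
#!/bin/sh
mkdir -p /tmp/certs.demo
# Throwaway self-signed cert, valid for 10 days (keygen chatter silenced).
openssl req -x509 -newkey rsa:2048 -nodes -days 10 \
  -subj "/CN=demoCA" \
  -keyout /tmp/certs.demo/ca.key -out /tmp/certs.demo/ca.pem 2>/dev/null

# Hash-named symlink, as in the "ln -fs /etc/ssl/certs/<hash>.0" steps above.
h=$(openssl x509 -hash -noout -in /tmp/certs.demo/ca.pem)
ln -fs /tmp/certs.demo/ca.pem "/tmp/certs.demo/$h.0"

# Exits nonzero only if the cert expires within 86400 seconds (one day).
openssl x509 -noout -in /tmp/certs.demo/ca.pem -checkend 86400 && echo "valid for 24h"
```

The hash-named symlink is the convention OpenSSL's trust-store lookup uses, which is why the log links each CA cert under its computed hash rather than its filename.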
	I1126 20:20:33.698809  248170 kubeadm.go:401] StartCluster: {Name:old-k8s-version-157431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-157431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:20:33.698923  248170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:20:33.699096  248170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:20:33.739772  248170 cri.go:89] found id: "a504f533180fa3660ae26e19df70e0f8b5e648c9eac05e818e5a34d99dfa556d"
	I1126 20:20:33.739816  248170 cri.go:89] found id: "d8d8479be421b12e08c13d54e96b2828f60bf10165cebecd5a7fe6990720f66d"
	I1126 20:20:33.739822  248170 cri.go:89] found id: "9646e408ccc61c8cc3bf687319bc9e134650a818bbe9f04715d5a74998506dc6"
	I1126 20:20:33.739827  248170 cri.go:89] found id: "abbeedf1745d559c442cf4064712a5660d93918ca1d77d450979a8d7cb48fd5b"
	I1126 20:20:33.739840  248170 cri.go:89] found id: ""
	I1126 20:20:33.739911  248170 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:20:33.755015  248170 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:20:33Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:20:33.755092  248170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:20:33.764003  248170 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:20:33.764021  248170 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:20:33.764074  248170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:20:33.773026  248170 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:20:33.774755  248170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-157431" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:20:33.775386  248170 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-157431" cluster setting kubeconfig missing "old-k8s-version-157431" context setting]
	I1126 20:20:33.776273  248170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:20:33.778289  248170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:20:33.787041  248170 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1126 20:20:33.787072  248170 kubeadm.go:602] duration metric: took 23.044617ms to restartPrimaryControlPlane
	I1126 20:20:33.787081  248170 kubeadm.go:403] duration metric: took 88.29816ms to StartCluster
	I1126 20:20:33.787096  248170 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:20:33.787149  248170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:20:33.788872  248170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:20:33.789105  248170 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:20:33.789312  248170 config.go:182] Loaded profile config "old-k8s-version-157431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:20:33.789355  248170 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:20:33.789429  248170 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-157431"
	I1126 20:20:33.789445  248170 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-157431"
	W1126 20:20:33.789453  248170 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:20:33.789502  248170 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:20:33.789627  248170 addons.go:70] Setting dashboard=true in profile "old-k8s-version-157431"
	I1126 20:20:33.789637  248170 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-157431"
	I1126 20:20:33.789654  248170 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-157431"
	I1126 20:20:33.789654  248170 addons.go:239] Setting addon dashboard=true in "old-k8s-version-157431"
	W1126 20:20:33.789664  248170 addons.go:248] addon dashboard should already be in state true
	I1126 20:20:33.789691  248170 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:20:33.789982  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:33.790003  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:33.790137  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:33.791581  248170 out.go:179] * Verifying Kubernetes components...
	I1126 20:20:33.792877  248170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:20:33.819582  248170 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:20:33.820878  248170 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-157431"
	W1126 20:20:33.820901  248170 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:20:33.820929  248170 host.go:66] Checking if "old-k8s-version-157431" exists ...
	I1126 20:20:33.820884  248170 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:20:33.821495  248170 cli_runner.go:164] Run: docker container inspect old-k8s-version-157431 --format={{.State.Status}}
	I1126 20:20:33.822088  248170 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:20:33.822107  248170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:20:33.822157  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:33.822208  248170 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:20:31.955584  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:31.955935  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:31.955980  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:31.956017  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:31.981839  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:31.981853  211567 cri.go:89] found id: ""
	I1126 20:20:31.981860  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:31.981910  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:31.985349  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:31.985405  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:32.011443  211567 cri.go:89] found id: ""
	I1126 20:20:32.011479  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.011489  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:32.011497  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:32.011539  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:32.044197  211567 cri.go:89] found id: ""
	I1126 20:20:32.044223  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.044233  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:32.044241  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:32.044296  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:32.070600  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:32.070621  211567 cri.go:89] found id: ""
	I1126 20:20:32.070629  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:32.070681  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:32.074192  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:32.074246  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:32.099706  211567 cri.go:89] found id: ""
	I1126 20:20:32.099730  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.099740  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:32.099747  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:32.099799  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:32.133396  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:32.133415  211567 cri.go:89] found id: ""
	I1126 20:20:32.133423  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:32.133478  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:32.137016  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:32.137074  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:32.162767  211567 cri.go:89] found id: ""
	I1126 20:20:32.162792  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.162802  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:32.162809  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:32.162858  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:32.186715  211567 cri.go:89] found id: ""
	I1126 20:20:32.186737  211567 logs.go:282] 0 containers: []
	W1126 20:20:32.186746  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:32.186756  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:32.186769  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:32.215504  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:32.215526  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:32.304110  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:32.304134  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:32.321643  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:32.321676  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:32.385782  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:32.385806  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:32.385820  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:32.417595  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:32.417617  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:32.467212  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:32.467231  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:32.494219  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:32.494240  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:35.044528  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:35.044912  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:35.044958  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:35.045017  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:35.073438  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:35.073481  211567 cri.go:89] found id: ""
	I1126 20:20:35.073490  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:35.073535  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:35.077572  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:35.077628  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:35.107121  211567 cri.go:89] found id: ""
	I1126 20:20:35.107143  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.107153  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:35.107160  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:35.107201  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:35.135791  211567 cri.go:89] found id: ""
	I1126 20:20:35.135813  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.135820  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:35.135825  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:35.135869  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:35.163256  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:35.163273  211567 cri.go:89] found id: ""
	I1126 20:20:35.163280  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:35.163330  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:35.167367  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:35.167423  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:35.195191  211567 cri.go:89] found id: ""
	I1126 20:20:35.195216  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.195226  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:35.195234  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:35.195289  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:35.226830  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:35.226852  211567 cri.go:89] found id: ""
	I1126 20:20:35.226862  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:35.226925  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:35.230857  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:35.230915  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:35.258294  211567 cri.go:89] found id: ""
	I1126 20:20:35.258321  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.258331  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:35.258338  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:35.258391  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:35.284947  211567 cri.go:89] found id: ""
	I1126 20:20:35.284971  211567 logs.go:282] 0 containers: []
	W1126 20:20:35.284980  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:35.284990  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:35.285003  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:35.306208  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:35.306243  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:35.361699  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:35.361715  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:35.361728  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:35.392730  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:35.392755  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:35.447705  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:35.447768  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:35.480433  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:35.480483  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:35.533519  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:35.533555  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:35.582612  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:35.582650  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:33.823358  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:20:33.823385  248170 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:20:33.823439  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:33.861792  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:33.862130  248170 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:20:33.862145  248170 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:20:33.862194  248170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-157431
	I1126 20:20:33.863262  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:33.889227  248170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/old-k8s-version-157431/id_rsa Username:docker}
	I1126 20:20:33.966837  248170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:20:33.985699  248170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:20:33.986015  248170 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-157431" to be "Ready" ...
	I1126 20:20:33.991356  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:20:33.991375  248170 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:20:34.005603  248170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:20:34.009431  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:20:34.009452  248170 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:20:34.025899  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:20:34.025921  248170 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:20:34.045079  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:20:34.045105  248170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:20:34.060675  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:20:34.060698  248170 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:20:34.079295  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:20:34.079334  248170 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:20:34.097084  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:20:34.097107  248170 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:20:34.113196  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:20:34.113218  248170 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:20:34.128736  248170 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:20:34.128756  248170 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:20:34.143961  248170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:20:35.686054  248170 node_ready.go:49] node "old-k8s-version-157431" is "Ready"
	I1126 20:20:35.686089  248170 node_ready.go:38] duration metric: took 1.700046128s for node "old-k8s-version-157431" to be "Ready" ...
	I1126 20:20:35.686105  248170 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:20:35.686159  248170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:20:36.322793  248170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.337044602s)
	I1126 20:20:36.322840  248170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.31720366s)
	I1126 20:20:36.610694  248170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.466693098s)
	I1126 20:20:36.610741  248170 api_server.go:72] duration metric: took 2.821605769s to wait for apiserver process to appear ...
	I1126 20:20:36.610766  248170 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:20:36.610841  248170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:20:36.611981  248170 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-157431 addons enable metrics-server
	
	I1126 20:20:36.613173  248170 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1126 20:20:36.614200  248170 addons.go:530] duration metric: took 2.824848299s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1126 20:20:36.615523  248170 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1126 20:20:36.615542  248170 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1126 20:20:33.628862  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:33.629231  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:33.629283  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:33.629338  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:33.672480  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:33.672501  216504 cri.go:89] found id: ""
	I1126 20:20:33.672509  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:33.672557  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:33.676724  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:33.676782  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:33.726991  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:33.727020  216504 cri.go:89] found id: ""
	I1126 20:20:33.727030  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:33.727087  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:33.732587  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:33.732649  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:33.778747  216504 cri.go:89] found id: ""
	I1126 20:20:33.778769  216504 logs.go:282] 0 containers: []
	W1126 20:20:33.778778  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:33.778786  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:33.778840  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:33.842067  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:33.842090  216504 cri.go:89] found id: ""
	I1126 20:20:33.842100  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:33.842161  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:33.849118  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:33.849185  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:33.903924  216504 cri.go:89] found id: ""
	I1126 20:20:33.903954  216504 logs.go:282] 0 containers: []
	W1126 20:20:33.903964  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:33.903971  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:33.904042  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:33.944988  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:33.945030  216504 cri.go:89] found id: ""
	I1126 20:20:33.945041  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:33.945105  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:33.949184  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:33.949243  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:33.992661  216504 cri.go:89] found id: ""
	I1126 20:20:33.992685  216504 logs.go:282] 0 containers: []
	W1126 20:20:33.992694  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:33.992701  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:33.992750  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:34.042140  216504 cri.go:89] found id: ""
	I1126 20:20:34.042166  216504 logs.go:282] 0 containers: []
	W1126 20:20:34.042272  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:34.042297  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:34.042311  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:34.098637  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:34.098670  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:34.141122  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:34.141147  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:34.177226  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:34.177256  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:34.231601  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:34.231634  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:34.348270  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:34.348300  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:34.364088  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:34.364114  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:34.431906  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:34.431929  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:34.431943  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:34.503579  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:34.503608  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:37.040444  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:37.040847  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:37.040903  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:37.040961  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:37.074285  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:37.074305  216504 cri.go:89] found id: ""
	I1126 20:20:37.074315  216504 logs.go:282] 1 containers: [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:37.074356  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:37.078318  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:37.078391  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:37.112759  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:37.112779  216504 cri.go:89] found id: ""
	I1126 20:20:37.112788  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:37.112838  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:37.116909  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:37.116964  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:37.154047  216504 cri.go:89] found id: ""
	I1126 20:20:37.154070  216504 logs.go:282] 0 containers: []
	W1126 20:20:37.154079  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:37.154087  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:37.154134  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:37.187722  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:37.187742  216504 cri.go:89] found id: ""
	I1126 20:20:37.187749  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:37.187796  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:37.191374  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:37.191434  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:37.230454  216504 cri.go:89] found id: ""
	I1126 20:20:37.230492  216504 logs.go:282] 0 containers: []
	W1126 20:20:37.230502  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:37.230509  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:37.230564  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:37.263155  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:37.263180  216504 cri.go:89] found id: ""
	I1126 20:20:37.263190  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:37.263246  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:37.267538  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:37.267590  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:37.299187  216504 cri.go:89] found id: ""
	I1126 20:20:37.299206  216504 logs.go:282] 0 containers: []
	W1126 20:20:37.299212  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:37.299217  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:37.299266  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:37.333307  216504 cri.go:89] found id: ""
	I1126 20:20:37.333327  216504 logs.go:282] 0 containers: []
	W1126 20:20:37.333337  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:37.333355  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:37.333372  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:37.347946  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:37.347966  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:37.406097  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:37.406117  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:37.406133  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:37.441416  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:37.441438  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:37.508962  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:37.508986  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:37.541793  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:37.541818  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:37.584139  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:37.584165  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:37.671823  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:37.671846  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:37.708177  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:37.708201  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:38.193908  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:20:38.194295  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:20:38.194343  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:38.194391  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:38.220311  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:38.220334  211567 cri.go:89] found id: ""
	I1126 20:20:38.220344  211567 logs.go:282] 1 containers: [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:38.220400  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:38.224100  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:38.224162  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:38.248255  211567 cri.go:89] found id: ""
	I1126 20:20:38.248276  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.248282  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:38.248288  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:38.248336  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:38.273947  211567 cri.go:89] found id: ""
	I1126 20:20:38.273976  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.273983  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:38.273991  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:38.274045  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:38.298131  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:38.298150  211567 cri.go:89] found id: ""
	I1126 20:20:38.298159  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:38.298211  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:38.301689  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:38.301745  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:38.326529  211567 cri.go:89] found id: ""
	I1126 20:20:38.326546  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.326552  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:38.326557  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:38.326594  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:38.351071  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:38.351087  211567 cri.go:89] found id: ""
	I1126 20:20:38.351095  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:38.351139  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:38.354585  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:38.354629  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:38.378888  211567 cri.go:89] found id: ""
	I1126 20:20:38.378909  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.378916  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:38.378922  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:38.378962  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:38.403010  211567 cri.go:89] found id: ""
	I1126 20:20:38.403032  211567 logs.go:282] 0 containers: []
	W1126 20:20:38.403042  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:38.403051  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:38.403059  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:38.430387  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:38.430407  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:38.519735  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:38.519771  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:38.534287  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:38.534314  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:38.586771  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:38.586795  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:38.586810  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:38.617599  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:38.617623  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:38.667927  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:38.667949  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:38.692943  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:38.692967  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:37.111297  248170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:20:37.115626  248170 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:20:37.116935  248170 api_server.go:141] control plane version: v1.28.0
	I1126 20:20:37.116959  248170 api_server.go:131] duration metric: took 506.134197ms to wait for apiserver health ...
	I1126 20:20:37.116975  248170 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:20:37.120863  248170 system_pods.go:59] 8 kube-system pods found
	I1126 20:20:37.120900  248170 system_pods.go:61] "coredns-5dd5756b68-jhrhx" [483a52cf-1d0a-4b51-b9b1-d986b07fa545] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:20:37.120908  248170 system_pods.go:61] "etcd-old-k8s-version-157431" [6b5ff917-ffe0-4bd0-bc19-2cbbcb7511c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:20:37.120919  248170 system_pods.go:61] "kindnet-zlg4b" [9e7b6449-704d-42a1-863d-ec678f485d78] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:20:37.120925  248170 system_pods.go:61] "kube-apiserver-old-k8s-version-157431" [4755237d-9665-4f4a-acdb-7aad9f3a685f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:20:37.120930  248170 system_pods.go:61] "kube-controller-manager-old-k8s-version-157431" [d139f643-1c90-4ccb-9841-d5da46929720] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:20:37.120938  248170 system_pods.go:61] "kube-proxy-qqdfx" [896fd93b-917a-42b9-92db-283923830743] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:20:37.120943  248170 system_pods.go:61] "kube-scheduler-old-k8s-version-157431" [54a1cc6d-cdf5-4041-aa7e-7edda8f78380] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:20:37.120951  248170 system_pods.go:61] "storage-provisioner" [f6d6f6e0-74c6-4708-abff-c18f6962424e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:20:37.120959  248170 system_pods.go:74] duration metric: took 3.977406ms to wait for pod list to return data ...
	I1126 20:20:37.120971  248170 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:20:37.122863  248170 default_sa.go:45] found service account: "default"
	I1126 20:20:37.122884  248170 default_sa.go:55] duration metric: took 1.903092ms for default service account to be created ...
	I1126 20:20:37.122894  248170 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:20:37.126064  248170 system_pods.go:86] 8 kube-system pods found
	I1126 20:20:37.126096  248170 system_pods.go:89] "coredns-5dd5756b68-jhrhx" [483a52cf-1d0a-4b51-b9b1-d986b07fa545] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:20:37.126108  248170 system_pods.go:89] "etcd-old-k8s-version-157431" [6b5ff917-ffe0-4bd0-bc19-2cbbcb7511c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:20:37.126119  248170 system_pods.go:89] "kindnet-zlg4b" [9e7b6449-704d-42a1-863d-ec678f485d78] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:20:37.126132  248170 system_pods.go:89] "kube-apiserver-old-k8s-version-157431" [4755237d-9665-4f4a-acdb-7aad9f3a685f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:20:37.126141  248170 system_pods.go:89] "kube-controller-manager-old-k8s-version-157431" [d139f643-1c90-4ccb-9841-d5da46929720] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:20:37.126153  248170 system_pods.go:89] "kube-proxy-qqdfx" [896fd93b-917a-42b9-92db-283923830743] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:20:37.126166  248170 system_pods.go:89] "kube-scheduler-old-k8s-version-157431" [54a1cc6d-cdf5-4041-aa7e-7edda8f78380] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:20:37.126174  248170 system_pods.go:89] "storage-provisioner" [f6d6f6e0-74c6-4708-abff-c18f6962424e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:20:37.126187  248170 system_pods.go:126] duration metric: took 3.281761ms to wait for k8s-apps to be running ...
	I1126 20:20:37.126199  248170 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:20:37.126240  248170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:20:37.139851  248170 system_svc.go:56] duration metric: took 13.647733ms WaitForService to wait for kubelet
	I1126 20:20:37.139878  248170 kubeadm.go:587] duration metric: took 3.350740739s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:20:37.139897  248170 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:20:37.142153  248170 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:20:37.142172  248170 node_conditions.go:123] node cpu capacity is 8
	I1126 20:20:37.142186  248170 node_conditions.go:105] duration metric: took 2.27842ms to run NodePressure ...
	I1126 20:20:37.142197  248170 start.go:242] waiting for startup goroutines ...
	I1126 20:20:37.142206  248170 start.go:247] waiting for cluster config update ...
	I1126 20:20:37.142215  248170 start.go:256] writing updated cluster config ...
	I1126 20:20:37.142443  248170 ssh_runner.go:195] Run: rm -f paused
	I1126 20:20:37.146374  248170 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:20:37.150624  248170 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-jhrhx" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:20:39.156129  248170 pod_ready.go:104] pod "coredns-5dd5756b68-jhrhx" is not "Ready", error: node "old-k8s-version-157431" hosting pod "coredns-5dd5756b68-jhrhx" is not "Ready" (will retry)
	W1126 20:20:41.655352  248170 pod_ready.go:104] pod "coredns-5dd5756b68-jhrhx" is not "Ready", error: node "old-k8s-version-157431" hosting pod "coredns-5dd5756b68-jhrhx" is not "Ready" (will retry)
	I1126 20:20:40.248146  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:41.241494  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1126 20:20:44.155289  248170 pod_ready.go:104] pod "coredns-5dd5756b68-jhrhx" is not "Ready", error: node "old-k8s-version-157431" hosting pod "coredns-5dd5756b68-jhrhx" is not "Ready" (will retry)
	I1126 20:20:46.655804  248170 pod_ready.go:94] pod "coredns-5dd5756b68-jhrhx" is "Ready"
	I1126 20:20:46.655828  248170 pod_ready.go:86] duration metric: took 9.50518735s for pod "coredns-5dd5756b68-jhrhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:46.658492  248170 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:46.662019  248170 pod_ready.go:94] pod "etcd-old-k8s-version-157431" is "Ready"
	I1126 20:20:46.662035  248170 pod_ready.go:86] duration metric: took 3.526002ms for pod "etcd-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:46.664348  248170 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:45.249553  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1126 20:20:45.249625  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:45.249694  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:45.283521  216504 cri.go:89] found id: "a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:20:45.283539  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:45.283550  216504 cri.go:89] found id: ""
	I1126 20:20:45.283560  216504 logs.go:282] 2 containers: [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:45.283612  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.287093  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.290453  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:45.290510  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:45.322500  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:45.322520  216504 cri.go:89] found id: ""
	I1126 20:20:45.322529  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:45.322564  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.326000  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:45.326054  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:45.358652  216504 cri.go:89] found id: ""
	I1126 20:20:45.358676  216504 logs.go:282] 0 containers: []
	W1126 20:20:45.358686  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:45.358693  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:45.358732  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:45.391304  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:45.391323  216504 cri.go:89] found id: ""
	I1126 20:20:45.391329  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:45.391369  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.394901  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:45.394961  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:45.426890  216504 cri.go:89] found id: ""
	I1126 20:20:45.426912  216504 logs.go:282] 0 containers: []
	W1126 20:20:45.426921  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:45.426927  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:45.426974  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:45.459132  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:45.459154  216504 cri.go:89] found id: ""
	I1126 20:20:45.459165  216504 logs.go:282] 1 containers: [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:45.459206  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:45.462602  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:45.462650  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:45.494221  216504 cri.go:89] found id: ""
	I1126 20:20:45.494240  216504 logs.go:282] 0 containers: []
	W1126 20:20:45.494247  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:45.494252  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:45.494294  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:45.528363  216504 cri.go:89] found id: ""
	I1126 20:20:45.528384  216504 logs.go:282] 0 containers: []
	W1126 20:20:45.528390  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:45.528402  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:45.528412  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:45.543065  216504 logs.go:123] Gathering logs for kube-apiserver [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431] ...
	I1126 20:20:45.543086  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:20:45.577358  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:45.577383  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:45.608558  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:45.608584  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:45.679538  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:45.679561  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:45.723806  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:45.723830  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1126 20:20:46.241904  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1126 20:20:46.241954  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:46.242008  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:46.268289  211567 cri.go:89] found id: "30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:20:46.268310  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:46.268316  211567 cri.go:89] found id: ""
	I1126 20:20:46.268323  211567 logs.go:282] 2 containers: [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:20:46.268374  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:46.272164  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:46.275972  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:46.276029  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:46.302263  211567 cri.go:89] found id: ""
	I1126 20:20:46.302284  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.302290  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:20:46.302296  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:46.302333  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:46.327276  211567 cri.go:89] found id: ""
	I1126 20:20:46.327294  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.327301  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:20:46.327307  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:46.327343  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:46.351875  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:46.351898  211567 cri.go:89] found id: ""
	I1126 20:20:46.351906  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:20:46.351946  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:46.355565  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:46.355610  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:46.379609  211567 cri.go:89] found id: ""
	I1126 20:20:46.379634  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.379643  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:46.379650  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:46.379688  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:46.403904  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:46.403924  211567 cri.go:89] found id: ""
	I1126 20:20:46.403931  211567 logs.go:282] 1 containers: [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:20:46.403971  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:20:46.407585  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:46.407636  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:46.433109  211567 cri.go:89] found id: ""
	I1126 20:20:46.433127  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.433133  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:46.433138  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:46.433174  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:46.457416  211567 cri.go:89] found id: ""
	I1126 20:20:46.457435  211567 logs.go:282] 0 containers: []
	W1126 20:20:46.457441  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:46.457469  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:20:46.457482  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:20:46.505502  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:20:46.505527  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:20:46.530301  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:20:46.530323  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:46.558232  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:46.558254  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:48.669420  248170 pod_ready.go:104] pod "kube-apiserver-old-k8s-version-157431" is not "Ready", error: <nil>
	I1126 20:20:49.168771  248170 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-157431" is "Ready"
	I1126 20:20:49.168799  248170 pod_ready.go:86] duration metric: took 2.504432021s for pod "kube-apiserver-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:49.171171  248170 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:20:51.177583  248170 pod_ready.go:104] pod "kube-controller-manager-old-k8s-version-157431" is not "Ready", error: <nil>
	I1126 20:20:52.178079  248170 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-157431" is "Ready"
	I1126 20:20:52.178109  248170 pod_ready.go:86] duration metric: took 3.006914887s for pod "kube-controller-manager-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.181483  248170 pod_ready.go:83] waiting for pod "kube-proxy-qqdfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.186183  248170 pod_ready.go:94] pod "kube-proxy-qqdfx" is "Ready"
	I1126 20:20:52.186215  248170 pod_ready.go:86] duration metric: took 4.704469ms for pod "kube-proxy-qqdfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.189303  248170 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.454980  248170 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-157431" is "Ready"
	I1126 20:20:52.455011  248170 pod_ready.go:86] duration metric: took 265.676811ms for pod "kube-scheduler-old-k8s-version-157431" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:20:52.455026  248170 pod_ready.go:40] duration metric: took 15.308615482s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:20:52.512132  248170 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1126 20:20:52.515110  248170 out.go:203] 
	W1126 20:20:52.516291  248170 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1126 20:20:52.517564  248170 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1126 20:20:52.518897  248170 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-157431" cluster and "default" namespace by default
	I1126 20:20:55.781512  216504 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.057662128s)
	W1126 20:20:55.781549  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1126 20:20:55.781556  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:55.781568  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:55.818257  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:55.818282  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:55.853882  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:55.853916  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:55.890893  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:55.890923  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:56.612804  211567 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.054527456s)
	W1126 20:20:56.612844  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1126 20:20:56.612856  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:56.612869  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:20:56.658594  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:56.658624  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:56.745352  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:56.745376  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:56.758661  211567 logs.go:123] Gathering logs for kube-apiserver [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c] ...
	I1126 20:20:56.758684  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:20:56.789078  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:20:56.789101  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:20:59.319951  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:21:00.107208  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:49868->192.168.103.2:8443: read: connection reset by peer
	I1126 20:21:00.107262  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:21:00.107307  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:21:00.133835  211567 cri.go:89] found id: "30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:21:00.133854  211567 cri.go:89] found id: "b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	I1126 20:21:00.133859  211567 cri.go:89] found id: ""
	I1126 20:21:00.133866  211567 logs.go:282] 2 containers: [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]
	I1126 20:21:00.133910  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.137743  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.141254  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:21:00.141307  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:21:00.166926  211567 cri.go:89] found id: ""
	I1126 20:21:00.166947  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.166956  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:21:00.166963  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:21:00.167014  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:21:00.193410  211567 cri.go:89] found id: ""
	I1126 20:21:00.193435  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.193443  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:21:00.193451  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:21:00.193513  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:21:00.219254  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:21:00.219280  211567 cri.go:89] found id: ""
	I1126 20:21:00.219290  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:21:00.219334  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.223080  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:21:00.223148  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:21:00.248004  211567 cri.go:89] found id: ""
	I1126 20:21:00.248028  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.248042  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:21:00.248049  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:21:00.248098  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:21:00.273571  211567 cri.go:89] found id: "43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5"
	I1126 20:21:00.273594  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:21:00.273598  211567 cri.go:89] found id: ""
	I1126 20:21:00.273606  211567 logs.go:282] 2 containers: [43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:21:00.273648  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.277454  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:00.280911  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:21:00.280966  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:21:00.306789  211567 cri.go:89] found id: ""
	I1126 20:21:00.306816  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.306825  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:21:00.306833  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:21:00.306885  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:21:00.333309  211567 cri.go:89] found id: ""
	I1126 20:21:00.333335  211567 logs.go:282] 0 containers: []
	W1126 20:21:00.333344  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:21:00.333360  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:21:00.333372  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:21:00.418949  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:21:00.418976  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:21:00.433102  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:21:00.433131  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:21:00.486267  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:21:00.486286  211567 logs.go:123] Gathering logs for kube-apiserver [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c] ...
	I1126 20:21:00.486295  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:21:00.515988  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:21:00.516010  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:21:00.565124  211567 logs.go:123] Gathering logs for kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf] ...
	I1126 20:21:00.565146  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	W1126 20:21:00.589615  211567 logs.go:130] failed kube-apiserver [b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:21:00.587835    6073 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf\": container with ID starting with b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf not found: ID does not exist" containerID="b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	time="2025-11-26T20:21:00Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf\": container with ID starting with b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf not found: ID does not exist"
	 output: 
	** stderr ** 
	E1126 20:21:00.587835    6073 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf\": container with ID starting with b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf not found: ID does not exist" containerID="b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf"
	time="2025-11-26T20:21:00Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf\": container with ID starting with b1e2ec1bbe04937e5f1498d891fe37212d2e096cd91a79ea614a7eba88b07bdf not found: ID does not exist"
	
	** /stderr **
	I1126 20:21:00.589642  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:21:00.589657  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:21:00.638883  211567 logs.go:123] Gathering logs for kube-controller-manager [43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5] ...
	I1126 20:21:00.638909  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5"
	I1126 20:21:00.663348  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:21:00.663369  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:21:00.689150  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:21:00.689174  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:58.481865  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:20:58.482284  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:20:58.482341  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:20:58.482402  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:20:58.515489  216504 cri.go:89] found id: "a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:20:58.515512  216504 cri.go:89] found id: "904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	I1126 20:20:58.515518  216504 cri.go:89] found id: ""
	I1126 20:20:58.515528  216504 logs.go:282] 2 containers: [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]
	I1126 20:20:58.515594  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.519256  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.522995  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:20:58.523056  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:20:58.554588  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:58.554605  216504 cri.go:89] found id: ""
	I1126 20:20:58.554614  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:20:58.554666  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.558013  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:20:58.558064  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:20:58.590485  216504 cri.go:89] found id: ""
	I1126 20:20:58.590507  216504 logs.go:282] 0 containers: []
	W1126 20:20:58.590515  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:20:58.590520  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:20:58.590564  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:20:58.622428  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:58.622448  216504 cri.go:89] found id: ""
	I1126 20:20:58.622483  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:20:58.622536  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.625879  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:20:58.625937  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:20:58.657015  216504 cri.go:89] found id: ""
	I1126 20:20:58.657039  216504 logs.go:282] 0 containers: []
	W1126 20:20:58.657048  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:20:58.657055  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:20:58.657098  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:20:58.689215  216504 cri.go:89] found id: "4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:20:58.689235  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:58.689241  216504 cri.go:89] found id: ""
	I1126 20:20:58.689250  216504 logs.go:282] 2 containers: [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:20:58.689301  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.692709  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:20:58.695923  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:20:58.695967  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:20:58.727730  216504 cri.go:89] found id: ""
	I1126 20:20:58.727751  216504 logs.go:282] 0 containers: []
	W1126 20:20:58.727761  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:20:58.727766  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:20:58.727813  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:20:58.760590  216504 cri.go:89] found id: ""
	I1126 20:20:58.760614  216504 logs.go:282] 0 containers: []
	W1126 20:20:58.760624  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:20:58.760635  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:20:58.760649  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:20:58.849907  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:20:58.849931  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:20:58.907806  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:20:58.907824  216504 logs.go:123] Gathering logs for kube-apiserver [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431] ...
	I1126 20:20:58.907835  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:20:58.945284  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:20:58.945312  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:20:58.978413  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:20:58.978439  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:20:59.048200  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:20:59.048230  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:20:59.080748  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:20:59.080771  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:20:59.119627  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:20:59.119653  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:20:59.134100  216504 logs.go:123] Gathering logs for kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823] ...
	I1126 20:20:59.134122  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	W1126 20:20:59.166399  216504 logs.go:130] failed kube-apiserver [904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:20:59.164187    6589 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823\": container with ID starting with 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 not found: ID does not exist" containerID="904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	time="2025-11-26T20:20:59Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823\": container with ID starting with 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1126 20:20:59.164187    6589 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823\": container with ID starting with 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 not found: ID does not exist" containerID="904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823"
	time="2025-11-26T20:20:59Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823\": container with ID starting with 904907a2641abdec8e34fa02236cf1ce091e0fca20c3a24473aad5e51333f823 not found: ID does not exist"
	
	** /stderr **
	I1126 20:20:59.166421  216504 logs.go:123] Gathering logs for kube-controller-manager [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354] ...
	I1126 20:20:59.166432  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:20:59.198370  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:20:59.198395  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:21:01.744507  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:21:01.744893  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:21:01.744945  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:21:01.745002  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:21:01.779949  216504 cri.go:89] found id: "a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:21:01.779966  216504 cri.go:89] found id: ""
	I1126 20:21:01.779974  216504 logs.go:282] 1 containers: [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431]
	I1126 20:21:01.780026  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.783582  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:21:01.783640  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:21:01.816786  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:21:01.816802  216504 cri.go:89] found id: ""
	I1126 20:21:01.816810  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:21:01.816856  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.820211  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:21:01.820266  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:21:01.853845  216504 cri.go:89] found id: ""
	I1126 20:21:01.853870  216504 logs.go:282] 0 containers: []
	W1126 20:21:01.853876  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:21:01.853882  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:21:01.853935  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:21:01.886072  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:21:01.886088  216504 cri.go:89] found id: ""
	I1126 20:21:01.886095  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:21:01.886147  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.889487  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:21:01.889540  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:21:01.921561  216504 cri.go:89] found id: ""
	I1126 20:21:01.921580  216504 logs.go:282] 0 containers: []
	W1126 20:21:01.921587  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:21:01.921593  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:21:01.921630  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:21:01.955564  216504 cri.go:89] found id: "4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:21:01.955584  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:21:01.955590  216504 cri.go:89] found id: ""
	I1126 20:21:01.955598  216504 logs.go:282] 2 containers: [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:21:01.955652  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.959137  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:01.962442  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:21:01.962504  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:21:01.994064  216504 cri.go:89] found id: ""
	I1126 20:21:01.994084  216504 logs.go:282] 0 containers: []
	W1126 20:21:01.994093  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:21:01.994099  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:21:01.994146  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:21:02.026601  216504 cri.go:89] found id: ""
	I1126 20:21:02.026626  216504 logs.go:282] 0 containers: []
	W1126 20:21:02.026635  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:21:02.026652  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:21:02.026669  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:21:02.041107  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:21:02.041128  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:21:02.098164  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:21:02.098185  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:21:02.098199  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:21:02.130561  216504 logs.go:123] Gathering logs for kube-controller-manager [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354] ...
	I1126 20:21:02.130587  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:21:02.162989  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:21:02.163023  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:21:02.199056  216504 logs.go:123] Gathering logs for kube-apiserver [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431] ...
	I1126 20:21:02.199082  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:21:02.234644  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:21:02.234670  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:21:02.310868  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:21:02.310894  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:21:02.344534  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:21:02.344558  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:21:02.389597  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:21:02.389621  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:21:03.219973  211567 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:21:03.220340  211567 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:21:03.220391  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:21:03.220441  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:21:03.246482  211567 cri.go:89] found id: "30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:21:03.246501  211567 cri.go:89] found id: ""
	I1126 20:21:03.246510  211567 logs.go:282] 1 containers: [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c]
	I1126 20:21:03.246563  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:03.250137  211567 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:21:03.250185  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:21:03.275762  211567 cri.go:89] found id: ""
	I1126 20:21:03.275789  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.275797  211567 logs.go:284] No container was found matching "etcd"
	I1126 20:21:03.275803  211567 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:21:03.275865  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:21:03.300254  211567 cri.go:89] found id: ""
	I1126 20:21:03.300277  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.300286  211567 logs.go:284] No container was found matching "coredns"
	I1126 20:21:03.300293  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:21:03.300333  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:21:03.324707  211567 cri.go:89] found id: "b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:21:03.324728  211567 cri.go:89] found id: ""
	I1126 20:21:03.324738  211567 logs.go:282] 1 containers: [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3]
	I1126 20:21:03.324786  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:03.328172  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:21:03.328228  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:21:03.352636  211567 cri.go:89] found id: ""
	I1126 20:21:03.352656  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.352665  211567 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:21:03.352671  211567 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:21:03.352721  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:21:03.377985  211567 cri.go:89] found id: "43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5"
	I1126 20:21:03.378005  211567 cri.go:89] found id: "cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:21:03.378010  211567 cri.go:89] found id: ""
	I1126 20:21:03.378018  211567 logs.go:282] 2 containers: [43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e]
	I1126 20:21:03.378066  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:03.381640  211567 ssh_runner.go:195] Run: which crictl
	I1126 20:21:03.384940  211567 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:21:03.384988  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:21:03.409114  211567 cri.go:89] found id: ""
	I1126 20:21:03.409135  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.409143  211567 logs.go:284] No container was found matching "kindnet"
	I1126 20:21:03.409150  211567 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:21:03.409198  211567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:21:03.433124  211567 cri.go:89] found id: ""
	I1126 20:21:03.433143  211567 logs.go:282] 0 containers: []
	W1126 20:21:03.433148  211567 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:21:03.433164  211567 logs.go:123] Gathering logs for kubelet ...
	I1126 20:21:03.433175  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:21:03.518659  211567 logs.go:123] Gathering logs for dmesg ...
	I1126 20:21:03.518688  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:21:03.532126  211567 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:21:03.532151  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:21:03.584472  211567 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:21:03.584490  211567 logs.go:123] Gathering logs for kube-controller-manager [43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5] ...
	I1126 20:21:03.584504  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 43982b98b5aaa319f602167886267f7bf3f21c726ad09aab92936f4816a878f5"
	I1126 20:21:03.608998  211567 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:21:03.609021  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:21:03.654905  211567 logs.go:123] Gathering logs for container status ...
	I1126 20:21:03.654929  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:21:03.683141  211567 logs.go:123] Gathering logs for kube-apiserver [30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c] ...
	I1126 20:21:03.683162  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 30d851bfffd1c5f2226fc9246c022d3359bd9a3092ddd6e984b665101217c12c"
	I1126 20:21:03.714240  211567 logs.go:123] Gathering logs for kube-scheduler [b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3] ...
	I1126 20:21:03.714263  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b3963c7b3e0d5b6ce97fc866eedd921b82c3d4e7f3a0fa9102bdbbbfc85ce0e3"
	I1126 20:21:03.765079  211567 logs.go:123] Gathering logs for kube-controller-manager [cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e] ...
	I1126 20:21:03.765103  211567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cb7c89dbe28e3a46273146faf38b88ea982e78bc3d793b93d41bd57cb1bf2c3e"
	I1126 20:21:04.981525  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:21:04.981898  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:21:04.981946  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:21:04.981990  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:21:05.015649  216504 cri.go:89] found id: "a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:21:05.015668  216504 cri.go:89] found id: ""
	I1126 20:21:05.015677  216504 logs.go:282] 1 containers: [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431]
	I1126 20:21:05.015730  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:05.019318  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:21:05.019367  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:21:05.052244  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:21:05.052266  216504 cri.go:89] found id: ""
	I1126 20:21:05.052274  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:21:05.052315  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:05.055762  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:21:05.055828  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:21:05.090611  216504 cri.go:89] found id: ""
	I1126 20:21:05.090636  216504 logs.go:282] 0 containers: []
	W1126 20:21:05.090646  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:21:05.090654  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:21:05.090707  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:21:05.128200  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:21:05.128219  216504 cri.go:89] found id: ""
	I1126 20:21:05.128227  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:21:05.128275  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:05.132087  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:21:05.132157  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:21:05.167131  216504 cri.go:89] found id: ""
	I1126 20:21:05.167154  216504 logs.go:282] 0 containers: []
	W1126 20:21:05.167161  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:21:05.167166  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:21:05.167211  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:21:05.200001  216504 cri.go:89] found id: "4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:21:05.200023  216504 cri.go:89] found id: "583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:21:05.200027  216504 cri.go:89] found id: ""
	I1126 20:21:05.200035  216504 logs.go:282] 2 containers: [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e]
	I1126 20:21:05.200086  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:05.203613  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:05.206742  216504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:21:05.206789  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:21:05.239192  216504 cri.go:89] found id: ""
	I1126 20:21:05.239214  216504 logs.go:282] 0 containers: []
	W1126 20:21:05.239224  216504 logs.go:284] No container was found matching "kindnet"
	I1126 20:21:05.239232  216504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:21:05.239283  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:21:05.272318  216504 cri.go:89] found id: ""
	I1126 20:21:05.272337  216504 logs.go:282] 0 containers: []
	W1126 20:21:05.272344  216504 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:21:05.272363  216504 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:21:05.272377  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:21:05.330677  216504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:21:05.330694  216504 logs.go:123] Gathering logs for kube-apiserver [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431] ...
	I1126 20:21:05.330705  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:21:05.366965  216504 logs.go:123] Gathering logs for etcd [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280] ...
	I1126 20:21:05.366988  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:21:05.399368  216504 logs.go:123] Gathering logs for kube-scheduler [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915] ...
	I1126 20:21:05.399400  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:21:05.471958  216504 logs.go:123] Gathering logs for kube-controller-manager [583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e] ...
	I1126 20:21:05.471987  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583fa851d9181ae71a9fa5fce1b1b06a79338fa18ac20779d8daf5e446e17d4e"
	I1126 20:21:05.504880  216504 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:21:05.504906  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:21:05.552680  216504 logs.go:123] Gathering logs for container status ...
	I1126 20:21:05.552703  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:21:05.595554  216504 logs.go:123] Gathering logs for kubelet ...
	I1126 20:21:05.595581  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:21:05.688170  216504 logs.go:123] Gathering logs for dmesg ...
	I1126 20:21:05.688196  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:21:05.705734  216504 logs.go:123] Gathering logs for kube-controller-manager [4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354] ...
	I1126 20:21:05.705765  216504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fb35e453473ab894694b4123b0caaba84eb921f83bf9f12a54289a43ef5b354"
	I1126 20:21:08.246469  216504 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:21:08.246814  216504 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1126 20:21:08.246867  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:21:08.246914  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:21:08.282484  216504 cri.go:89] found id: "a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431"
	I1126 20:21:08.282505  216504 cri.go:89] found id: ""
	I1126 20:21:08.282512  216504 logs.go:282] 1 containers: [a691af7ac3ed29c9fd271ce5327631b6b984af6e8860263f5f63b06c02387431]
	I1126 20:21:08.282556  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:08.286260  216504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:21:08.286312  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:21:08.318906  216504 cri.go:89] found id: "e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280"
	I1126 20:21:08.318925  216504 cri.go:89] found id: ""
	I1126 20:21:08.318933  216504 logs.go:282] 1 containers: [e183e3c91c0a77c0e29a0e90150e930355eb70854746f833f74ffde7abb2b280]
	I1126 20:21:08.318975  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:08.322312  216504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:21:08.322360  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:21:08.356620  216504 cri.go:89] found id: ""
	I1126 20:21:08.356646  216504 logs.go:282] 0 containers: []
	W1126 20:21:08.356656  216504 logs.go:284] No container was found matching "coredns"
	I1126 20:21:08.356663  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:21:08.356724  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:21:08.395277  216504 cri.go:89] found id: "b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915"
	I1126 20:21:08.395304  216504 cri.go:89] found id: ""
	I1126 20:21:08.395314  216504 logs.go:282] 1 containers: [b8ece062c9a0e5492bc91dd93b150c3cbf1c5a0c4392a32117adf307c2241915]
	I1126 20:21:08.395371  216504 ssh_runner.go:195] Run: which crictl
	I1126 20:21:08.399257  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:21:08.399325  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:21:08.434974  216504 cri.go:89] found id: ""
	I1126 20:21:08.434996  216504 logs.go:282] 0 containers: []
	W1126 20:21:08.435006  216504 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:21:08.435013  216504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:21:08.435060  216504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	
	
	==> CRI-O <==
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.825201614Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f238ce5579b9631d14d43fb7c6eca63d6e4841c169ba341fef637c933dae6182/merged/etc/group: no such file or directory"
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.825639827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.863843797Z" level=info msg="Created container d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs/kubernetes-dashboard" id=1840d4db-9ab8-4802-a1b7-21f3cf55fbfa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.864507025Z" level=info msg="Starting container: d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb" id=63f03e11-4981-4b95-bd74-c4f8f9a6ab11 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:20:52 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:52.866413087Z" level=info msg="Started container" PID=1528 containerID=d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs/kubernetes-dashboard id=63f03e11-4981-4b95-bd74-c4f8f9a6ab11 name=/runtime.v1.RuntimeService/StartContainer sandboxID=03ce806a37f7730413b18862cb23a8aa136b96e1985e43c29a4ff45bfa8d1a4f
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.183650797Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=9031b449-0145-417a-80d2-7d852a18fcaf name=/runtime.v1.ImageService/PullImage
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.184349035Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3f8898a0-2e03-4625-84ce-a77b4aa4ae76 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.187486419Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=3f140241-96b1-492a-90ca-b0f2e9b38d6d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.187602313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.193739677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.194206911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.215038919Z" level=info msg="Created container e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=3f140241-96b1-492a-90ca-b0f2e9b38d6d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.215535469Z" level=info msg="Starting container: e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0" id=02b0e6a2-f6aa-4382-9adf-b94158bb7367 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.217125533Z" level=info msg="Started container" PID=1755 containerID=e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper id=02b0e6a2-f6aa-4382-9adf-b94158bb7367 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d214e4e1535f0ed3867d04ef1dc64416c3bf57faf76db7250d67ba32eb41422a
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.247125219Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0363d334-a6ce-4449-85bf-722a2db67370 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.249654036Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aedab969-88b3-475f-bdee-a70571b950bb name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.253724688Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=f5bfe5a1-b1ce-4e68-b731-b72f546d8ffb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.253840461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.260871164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.261318535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.283581507Z" level=info msg="Created container 55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=f5bfe5a1-b1ce-4e68-b731-b72f546d8ffb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.284118156Z" level=info msg="Starting container: 55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6" id=f9f0a2b1-6230-4fa9-91ed-8543a715e5cf name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:20:55 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:55.285851029Z" level=info msg="Started container" PID=1766 containerID=55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper id=f9f0a2b1-6230-4fa9-91ed-8543a715e5cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=d214e4e1535f0ed3867d04ef1dc64416c3bf57faf76db7250d67ba32eb41422a
	Nov 26 20:20:56 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:56.252422538Z" level=info msg="Removing container: e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0" id=3c7d0ed3-a4ee-4355-84a5-6a136dab631a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:20:56 old-k8s-version-157431 crio[568]: time="2025-11-26T20:20:56.261483021Z" level=info msg="Removed container e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz/dashboard-metrics-scraper" id=3c7d0ed3-a4ee-4355-84a5-6a136dab631a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	55649a60515f7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   1                   d214e4e1535f0       dashboard-metrics-scraper-5f989dc9cf-jqrrz       kubernetes-dashboard
	d0ff3b383353d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   16 seconds ago      Running             kubernetes-dashboard        0                   03ce806a37f77       kubernetes-dashboard-8694d4445c-j28gs            kubernetes-dashboard
	6c6f323935b9b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           29 seconds ago      Running             coredns                     0                   37c137e30da13       coredns-5dd5756b68-jhrhx                         kube-system
	165a11a2acfbf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           29 seconds ago      Running             busybox                     1                   e040ebfb68037       busybox                                          default
	accee1a5d908d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           32 seconds ago      Running             kube-proxy                  0                   4b386be27b3bb       kube-proxy-qqdfx                                 kube-system
	39908a2bff30f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           32 seconds ago      Exited              storage-provisioner         0                   409f8abc87234       storage-provisioner                              kube-system
	16fda5da153ba       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           32 seconds ago      Running             kindnet-cni                 0                   ea0c8984ec22c       kindnet-zlg4b                                    kube-system
	a504f533180fa       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           35 seconds ago      Running             kube-controller-manager     0                   20182ff90baa7       kube-controller-manager-old-k8s-version-157431   kube-system
	d8d8479be421b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           35 seconds ago      Running             kube-apiserver              0                   bad2a581a38fb       kube-apiserver-old-k8s-version-157431            kube-system
	9646e408ccc61       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           35 seconds ago      Running             etcd                        0                   0114291b18acb       etcd-old-k8s-version-157431                      kube-system
	abbeedf1745d5       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           35 seconds ago      Running             kube-scheduler              0                   5108ec064f552       kube-scheduler-old-k8s-version-157431            kube-system
	
	
	==> coredns [6c6f323935b9bb8fa8c4aab620011fbe3b488cd55cf74ff43d5189c233b9a31a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55998 - 5028 "HINFO IN 4798133491747807030.4566811342291500471. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063988172s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-157431
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-157431
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=old-k8s-version-157431
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_19_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:19:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-157431
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:20:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:20:46 +0000   Wed, 26 Nov 2025 20:19:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:20:46 +0000   Wed, 26 Nov 2025 20:19:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:20:46 +0000   Wed, 26 Nov 2025 20:19:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:20:46 +0000   Wed, 26 Nov 2025 20:20:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-157431
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                55f945af-c138-4761-b59d-13bed6931065
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-5dd5756b68-jhrhx                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     85s
	  kube-system                 etcd-old-k8s-version-157431                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-zlg4b                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      85s
	  kube-system                 kube-apiserver-old-k8s-version-157431             250m (3%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-old-k8s-version-157431    200m (2%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-qqdfx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-old-k8s-version-157431             100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jqrrz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-j28gs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 32s                kube-proxy       
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node old-k8s-version-157431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           86s                node-controller  Node old-k8s-version-157431 event: Registered Node old-k8s-version-157431 in Controller
	  Normal  NodeReady                73s                kubelet          Node old-k8s-version-157431 status is now: NodeReady
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x9 over 36s)  kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node old-k8s-version-157431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x7 over 36s)  kubelet          Node old-k8s-version-157431 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21s                node-controller  Node old-k8s-version-157431 event: Registered Node old-k8s-version-157431 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [9646e408ccc61c8cc3bf687319bc9e134650a818bbe9f04715d5a74998506dc6] <==
	{"level":"info","ts":"2025-11-26T20:20:33.742482Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:20:33.742498Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:20:33.742832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-26T20:20:33.742951Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-26T20:20:33.743152Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:20:33.743224Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:20:33.744518Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-26T20:20:33.744618Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:20:33.744642Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:20:33.744747Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-26T20:20:33.744832Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-26T20:20:34.732051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-26T20:20:34.73209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-26T20:20:34.732104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-26T20:20:34.732135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-26T20:20:34.732141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-26T20:20:34.732149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-26T20:20:34.732155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-26T20:20:34.733099Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-157431 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-26T20:20:34.733126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:20:34.733113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:20:34.733377Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-26T20:20:34.733405Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-26T20:20:34.734527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-26T20:20:34.734537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:21:09 up  1:03,  0 user,  load average: 2.98, 2.93, 1.89
	Linux old-k8s-version-157431 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [16fda5da153ba465a321b8058a258b00de264660a2e64bf42345ae13e96d6044] <==
	I1126 20:20:36.721553       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:20:36.721760       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:20:36.721876       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:20:36.721895       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:20:36.721907       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:20:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:20:36.921678       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:20:36.921710       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:20:36.921722       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:20:36.922166       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:20:37.422448       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:20:37.422488       1 metrics.go:72] Registering metrics
	I1126 20:20:37.422554       1 controller.go:711] "Syncing nftables rules"
	I1126 20:20:46.921936       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:20:46.922020       1 main.go:301] handling current node
	I1126 20:20:56.922075       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:20:56.922142       1 main.go:301] handling current node
	I1126 20:21:06.927568       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:21:06.927597       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d8d8479be421b12e08c13d54e96b2828f60bf10165cebecd5a7fe6990720f66d] <==
	I1126 20:20:35.750670       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1126 20:20:35.750717       1 aggregator.go:166] initial CRD sync complete...
	I1126 20:20:35.750729       1 autoregister_controller.go:141] Starting autoregister controller
	I1126 20:20:35.750737       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:20:35.750745       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:20:35.750779       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1126 20:20:35.750863       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1126 20:20:35.750944       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1126 20:20:35.750866       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1126 20:20:35.750873       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:20:35.751011       1 shared_informer.go:318] Caches are synced for configmaps
	E1126 20:20:35.755813       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:20:35.783960       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1126 20:20:35.794422       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:20:36.490515       1 controller.go:624] quota admission added evaluator for: namespaces
	I1126 20:20:36.527090       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1126 20:20:36.546779       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:20:36.552842       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:20:36.560713       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1126 20:20:36.592026       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.220.69"}
	I1126 20:20:36.605689       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.173.54"}
	I1126 20:20:36.653688       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:20:48.797261       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:20:48.997102       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1126 20:20:49.047112       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a504f533180fa3660ae26e19df70e0f8b5e648c9eac05e818e5a34d99dfa556d] <==
	I1126 20:20:48.794697       1 shared_informer.go:318] Caches are synced for disruption
	I1126 20:20:48.807804       1 shared_informer.go:318] Caches are synced for crt configmap
	I1126 20:20:48.813044       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1126 20:20:48.843391       1 shared_informer.go:318] Caches are synced for resource quota
	I1126 20:20:48.902740       1 shared_informer.go:318] Caches are synced for resource quota
	I1126 20:20:49.000502       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1126 20:20:49.001507       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1126 20:20:49.201429       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-j28gs"
	I1126 20:20:49.201826       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-jqrrz"
	I1126 20:20:49.207534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="207.259884ms"
	I1126 20:20:49.207795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="206.533497ms"
	I1126 20:20:49.212969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.385867ms"
	I1126 20:20:49.213059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.843µs"
	I1126 20:20:49.214129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.287683ms"
	I1126 20:20:49.214208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.886µs"
	I1126 20:20:49.218021       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:20:49.218201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.288µs"
	I1126 20:20:49.224825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="38.935µs"
	I1126 20:20:49.244482       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:20:49.244501       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1126 20:20:53.260420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.553237ms"
	I1126 20:20:53.260536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.139µs"
	I1126 20:20:55.256592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.614µs"
	I1126 20:20:56.262187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.842µs"
	I1126 20:20:57.264403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.476µs"
	
	
	==> kube-proxy [accee1a5d908d0156207ba0fe0bd3d00a19a4cdf28557c3848fd5a55e1fa67e5] <==
	I1126 20:20:36.569106       1 server_others.go:69] "Using iptables proxy"
	I1126 20:20:36.578316       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1126 20:20:36.598855       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:20:36.601351       1 server_others.go:152] "Using iptables Proxier"
	I1126 20:20:36.601377       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1126 20:20:36.601384       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1126 20:20:36.601414       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1126 20:20:36.601669       1 server.go:846] "Version info" version="v1.28.0"
	I1126 20:20:36.601686       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:20:36.602277       1 config.go:188] "Starting service config controller"
	I1126 20:20:36.602308       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1126 20:20:36.602285       1 config.go:97] "Starting endpoint slice config controller"
	I1126 20:20:36.602341       1 config.go:315] "Starting node config controller"
	I1126 20:20:36.602354       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1126 20:20:36.602365       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1126 20:20:36.702802       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1126 20:20:36.702827       1 shared_informer.go:318] Caches are synced for node config
	I1126 20:20:36.702811       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [abbeedf1745d559c442cf4064712a5660d93918ca1d77d450979a8d7cb48fd5b] <==
	I1126 20:20:34.139562       1 serving.go:348] Generated self-signed cert in-memory
	W1126 20:20:35.675992       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:20:35.676025       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:20:35.676039       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:20:35.676049       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:20:35.711129       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1126 20:20:35.711166       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:20:35.712832       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:20:35.712881       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1126 20:20:35.713988       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1126 20:20:35.714064       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1126 20:20:35.813672       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.700245     735 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.700342     735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/483a52cf-1d0a-4b51-b9b1-d986b07fa545-config-volume podName:483a52cf-1d0a-4b51-b9b1-d986b07fa545 nodeName:}" failed. No retries permitted until 2025-11-26 20:20:39.700319627 +0000 UTC m=+6.605603985 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/483a52cf-1d0a-4b51-b9b1-d986b07fa545-config-volume") pod "coredns-5dd5756b68-jhrhx" (UID: "483a52cf-1d0a-4b51-b9b1-d986b07fa545") : object "kube-system"/"coredns" not registered
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.901339     735 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.901371     735 projected.go:198] Error preparing data for projected volume kube-api-access-kgqzr for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 26 20:20:37 old-k8s-version-157431 kubelet[735]: E1126 20:20:37.901430     735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6c41f35-cc7b-423c-b8e2-76531e7a8b3b-kube-api-access-kgqzr podName:d6c41f35-cc7b-423c-b8e2-76531e7a8b3b nodeName:}" failed. No retries permitted until 2025-11-26 20:20:39.901415518 +0000 UTC m=+6.806699872 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kgqzr" (UniqueName: "kubernetes.io/projected/d6c41f35-cc7b-423c-b8e2-76531e7a8b3b-kube-api-access-kgqzr") pod "busybox" (UID: "d6c41f35-cc7b-423c-b8e2-76531e7a8b3b") : object "default"/"kube-root-ca.crt" not registered
	Nov 26 20:20:41 old-k8s-version-157431 kubelet[735]: I1126 20:20:41.570413     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.208235     735 topology_manager.go:215] "Topology Admit Handler" podUID="8c0023d6-bea5-44e3-bfee-5f411cad2ae6" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-jqrrz"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.209952     735 topology_manager.go:215] "Topology Admit Handler" podUID="bcb842e0-68ab-415a-9899-b57f19282469" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-j28gs"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.259348     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpxr8\" (UniqueName: \"kubernetes.io/projected/bcb842e0-68ab-415a-9899-b57f19282469-kube-api-access-wpxr8\") pod \"kubernetes-dashboard-8694d4445c-j28gs\" (UID: \"bcb842e0-68ab-415a-9899-b57f19282469\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.259393     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dkk\" (UniqueName: \"kubernetes.io/projected/8c0023d6-bea5-44e3-bfee-5f411cad2ae6-kube-api-access-66dkk\") pod \"dashboard-metrics-scraper-5f989dc9cf-jqrrz\" (UID: \"8c0023d6-bea5-44e3-bfee-5f411cad2ae6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.259416     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bcb842e0-68ab-415a-9899-b57f19282469-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-j28gs\" (UID: \"bcb842e0-68ab-415a-9899-b57f19282469\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs"
	Nov 26 20:20:49 old-k8s-version-157431 kubelet[735]: I1126 20:20:49.259443     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8c0023d6-bea5-44e3-bfee-5f411cad2ae6-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-jqrrz\" (UID: \"8c0023d6-bea5-44e3-bfee-5f411cad2ae6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz"
	Nov 26 20:20:53 old-k8s-version-157431 kubelet[735]: I1126 20:20:53.253512     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-j28gs" podStartSLOduration=0.968626714 podCreationTimestamp="2025-11-26 20:20:49 +0000 UTC" firstStartedPulling="2025-11-26 20:20:49.531155338 +0000 UTC m=+16.436439705" lastFinishedPulling="2025-11-26 20:20:52.815952854 +0000 UTC m=+19.721237221" observedRunningTime="2025-11-26 20:20:53.253366192 +0000 UTC m=+20.158650567" watchObservedRunningTime="2025-11-26 20:20:53.25342423 +0000 UTC m=+20.158708605"
	Nov 26 20:20:55 old-k8s-version-157431 kubelet[735]: I1126 20:20:55.246689     735 scope.go:117] "RemoveContainer" containerID="e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0"
	Nov 26 20:20:56 old-k8s-version-157431 kubelet[735]: I1126 20:20:56.251131     735 scope.go:117] "RemoveContainer" containerID="e025c2e925b2866a2de52ab210cdbb69712e60e5b6acfdd84cf87abce99d23f0"
	Nov 26 20:20:56 old-k8s-version-157431 kubelet[735]: I1126 20:20:56.251310     735 scope.go:117] "RemoveContainer" containerID="55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	Nov 26 20:20:56 old-k8s-version-157431 kubelet[735]: E1126 20:20:56.251718     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jqrrz_kubernetes-dashboard(8c0023d6-bea5-44e3-bfee-5f411cad2ae6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz" podUID="8c0023d6-bea5-44e3-bfee-5f411cad2ae6"
	Nov 26 20:20:57 old-k8s-version-157431 kubelet[735]: I1126 20:20:57.254571     735 scope.go:117] "RemoveContainer" containerID="55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	Nov 26 20:20:57 old-k8s-version-157431 kubelet[735]: E1126 20:20:57.254863     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jqrrz_kubernetes-dashboard(8c0023d6-bea5-44e3-bfee-5f411cad2ae6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz" podUID="8c0023d6-bea5-44e3-bfee-5f411cad2ae6"
	Nov 26 20:20:59 old-k8s-version-157431 kubelet[735]: I1126 20:20:59.509380     735 scope.go:117] "RemoveContainer" containerID="55649a60515f7f70c4cad7df626752d394192d9164d58b7e1c46a43be8398fa6"
	Nov 26 20:20:59 old-k8s-version-157431 kubelet[735]: E1126 20:20:59.509858     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jqrrz_kubernetes-dashboard(8c0023d6-bea5-44e3-bfee-5f411cad2ae6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jqrrz" podUID="8c0023d6-bea5-44e3-bfee-5f411cad2ae6"
	Nov 26 20:21:04 old-k8s-version-157431 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:21:04 old-k8s-version-157431 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:21:04 old-k8s-version-157431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:21:04 old-k8s-version-157431 systemd[1]: kubelet.service: Consumed 1.024s CPU time.
	
	
	==> kubernetes-dashboard [d0ff3b383353d86e05f922635c797f096102d6ebe358f4b279634a688ecbe2fb] <==
	2025/11/26 20:20:52 Using namespace: kubernetes-dashboard
	2025/11/26 20:20:52 Using in-cluster config to connect to apiserver
	2025/11/26 20:20:52 Using secret token for csrf signing
	2025/11/26 20:20:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:20:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:20:52 Successful initial request to the apiserver, version: v1.28.0
	2025/11/26 20:20:52 Generating JWE encryption key
	2025/11/26 20:20:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:20:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:20:53 Initializing JWE encryption key from synchronized object
	2025/11/26 20:20:53 Creating in-cluster Sidecar client
	2025/11/26 20:20:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:20:53 Serving insecurely on HTTP port: 9090
	2025/11/26 20:20:52 Starting overwatch
	
	
	==> storage-provisioner [39908a2bff30f053ae4e66e2b1f1ed4846ab95ad78050fe44589cb96f7e2ddb9] <==
	I1126 20:20:36.533276       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:21:06.536921       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-157431 -n old-k8s-version-157431
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-157431 -n old-k8s-version-157431: exit status 2 (332.196786ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-157431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.77s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.04s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.205083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-026579 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-026579 describe deploy/metrics-server -n kube-system: exit status 1 (62.505209ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-026579 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-026579
helpers_test.go:243: (dbg) docker inspect no-preload-026579:

-- stdout --
	[
	    {
	        "Id": "9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32",
	        "Created": "2025-11-26T20:21:13.866220209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:21:13.894995694Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/hostname",
	        "HostsPath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/hosts",
	        "LogPath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32-json.log",
	        "Name": "/no-preload-026579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-026579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-026579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32",
	                "LowerDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-026579",
	                "Source": "/var/lib/docker/volumes/no-preload-026579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-026579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-026579",
	                "name.minikube.sigs.k8s.io": "no-preload-026579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "469ed259104d7c6970d9d9d6200926431255fde224d1226e2f3bf700840410dd",
	            "SandboxKey": "/var/run/docker/netns/469ed259104d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-026579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2ae6f13df7ae90e563079e045184e161803a9312deeafb40deb6a3cda467fd0e",
	                    "EndpointID": "66a34f64f1dd7efdb4232a0e34a36074081fbec44174be7f0f076c7e524d9c58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "9a:24:91:97:80:33",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-026579",
	                        "9844cee89f7d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-026579 -n no-preload-026579
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-026579 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-options-706331 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-706331          │ jenkins │ v1.37.0 │ 26 Nov 25 20:18 UTC │ 26 Nov 25 20:19 UTC │
	│ ssh     │ cert-options-706331 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-706331          │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ ssh     │ -p cert-options-706331 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-706331          │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ delete  │ -p cert-options-706331                                                                                                                                                                                                                        │ cert-options-706331          │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-157431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │                     │
	│ stop    │ -p old-k8s-version-157431 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-157431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ image   │ old-k8s-version-157431 image list --format=json                                                                                                                                                                                               │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ pause   │ -p old-k8s-version-157431 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	│ delete  │ -p old-k8s-version-157431                                                                                                                                                                                                                     │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ delete  │ -p old-k8s-version-157431                                                                                                                                                                                                                     │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p cert-expiration-571738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ delete  │ -p cert-expiration-571738                                                                                                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p stopped-upgrade-211103                                                                                                                                                                                                                     │ stopped-upgrade-211103       │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p kubernetes-upgrade-225144                                                                                                                                                                                                                  │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p disable-driver-mounts-221304                                                                                                                                                                                                               │ disable-driver-mounts-221304 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:22:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:22:05.470014  271769 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:22:05.470138  271769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:05.470150  271769 out.go:374] Setting ErrFile to fd 2...
	I1126 20:22:05.470157  271769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:05.470439  271769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:22:05.471066  271769 out.go:368] Setting JSON to false
	I1126 20:22:05.472553  271769 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3875,"bootTime":1764184650,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:22:05.472612  271769 start.go:143] virtualization: kvm guest
	I1126 20:22:05.474366  271769 out.go:179] * [newest-cni-297942] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:22:05.476213  271769 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:22:05.476251  271769 notify.go:221] Checking for updates...
	I1126 20:22:05.478287  271769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:22:05.479637  271769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:05.480781  271769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:22:05.482026  271769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:22:05.483310  271769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:22:05.485154  271769 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:05.485292  271769 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:05.485421  271769 config.go:182] Loaded profile config "no-preload-026579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:05.485567  271769 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:22:05.514350  271769 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:22:05.514516  271769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:05.577974  271769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:71 SystemTime:2025-11-26 20:22:05.566964083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:05.578082  271769 docker.go:319] overlay module found
	I1126 20:22:05.579912  271769 out.go:179] * Using the docker driver based on user configuration
	I1126 20:22:05.580962  271769 start.go:309] selected driver: docker
	I1126 20:22:05.580974  271769 start.go:927] validating driver "docker" against <nil>
	I1126 20:22:05.580984  271769 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:22:05.581559  271769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:05.645595  271769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-26 20:22:05.634039842 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:05.645821  271769 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1126 20:22:05.645863  271769 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1126 20:22:05.646177  271769 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:22:05.648392  271769 out.go:179] * Using Docker driver with root privileges
	I1126 20:22:05.649430  271769 cni.go:84] Creating CNI manager for ""
	I1126 20:22:05.649513  271769 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:05.649528  271769 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:22:05.649592  271769 start.go:353] cluster config:
	{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:05.650829  271769 out.go:179] * Starting "newest-cni-297942" primary control-plane node in "newest-cni-297942" cluster
	I1126 20:22:05.651898  271769 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:22:05.653019  271769 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:22:05.653979  271769 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:05.654006  271769 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:22:05.654014  271769 cache.go:65] Caching tarball of preloaded images
	I1126 20:22:05.654069  271769 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:22:05.654092  271769 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:22:05.654105  271769 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:22:05.654204  271769 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json ...
	I1126 20:22:05.654223  271769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json: {Name:mke134f9dc36b8353ce5e4cfc96424f05d910165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:05.676290  271769 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:22:05.676312  271769 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:22:05.676329  271769 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:22:05.676359  271769 start.go:360] acquireMachinesLock for newest-cni-297942: {Name:mkec4aea2213ece57272965b7ad56143d17ef93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:05.676452  271769 start.go:364] duration metric: took 70.692µs to acquireMachinesLock for "newest-cni-297942"
	I1126 20:22:05.676495  271769 start.go:93] Provisioning new machine with config: &{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:22:05.676600  271769 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:22:06.006640  261431 node_ready.go:49] node "embed-certs-949294" is "Ready"
	I1126 20:22:06.006675  261431 node_ready.go:38] duration metric: took 11.505308269s for node "embed-certs-949294" to be "Ready" ...
	I1126 20:22:06.006693  261431 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:06.006753  261431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:06.026499  261431 api_server.go:72] duration metric: took 11.815858466s to wait for apiserver process to appear ...
	I1126 20:22:06.026533  261431 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:06.026553  261431 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1126 20:22:06.032814  261431 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1126 20:22:06.033944  261431 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:06.033997  261431 api_server.go:131] duration metric: took 7.436748ms to wait for apiserver health ...
	I1126 20:22:06.034023  261431 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:06.037638  261431 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:06.037665  261431 system_pods.go:61] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:06.037671  261431 system_pods.go:61] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:06.037676  261431 system_pods.go:61] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:06.037679  261431 system_pods.go:61] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:06.037683  261431 system_pods.go:61] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:06.037695  261431 system_pods.go:61] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:06.037702  261431 system_pods.go:61] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:06.037704  261431 system_pods.go:61] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Pending
	I1126 20:22:06.037710  261431 system_pods.go:74] duration metric: took 3.674285ms to wait for pod list to return data ...
	I1126 20:22:06.037725  261431 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:06.044500  261431 default_sa.go:45] found service account: "default"
	I1126 20:22:06.044522  261431 default_sa.go:55] duration metric: took 6.791361ms for default service account to be created ...
	I1126 20:22:06.044532  261431 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:06.047723  261431 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:06.047754  261431 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:06.047763  261431 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:06.047772  261431 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:06.047781  261431 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:06.047787  261431 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:06.047792  261431 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:06.047797  261431 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:06.047807  261431 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:06.047829  261431 retry.go:31] will retry after 263.412245ms: missing components: kube-dns
	I1126 20:22:06.316625  261431 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:06.316671  261431 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:06.316679  261431 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:06.316701  261431 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:06.316711  261431 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:06.316717  261431 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:06.316723  261431 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:06.316741  261431 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:06.316749  261431 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:06.316766  261431 retry.go:31] will retry after 364.189329ms: missing components: kube-dns
	I1126 20:22:06.685257  261431 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:06.685290  261431 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:06.685296  261431 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:06.685301  261431 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:06.685306  261431 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:06.685310  261431 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:06.685314  261431 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:06.685317  261431 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:06.685321  261431 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:06.685336  261431 retry.go:31] will retry after 366.846011ms: missing components: kube-dns
	I1126 20:22:07.109420  261431 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:07.109483  261431 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Running
	I1126 20:22:07.109495  261431 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:07.109501  261431 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:07.109508  261431 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:07.109519  261431 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:07.109525  261431 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:07.109539  261431 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:07.109546  261431 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Running
	I1126 20:22:07.109558  261431 system_pods.go:126] duration metric: took 1.065018599s to wait for k8s-apps to be running ...
	I1126 20:22:07.109572  261431 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:07.109629  261431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:07.123369  261431 system_svc.go:56] duration metric: took 13.789166ms WaitForService to wait for kubelet
	I1126 20:22:07.123397  261431 kubeadm.go:587] duration metric: took 12.912763607s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:07.123420  261431 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:07.126595  261431 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:07.126619  261431 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:07.126631  261431 node_conditions.go:105] duration metric: took 3.206434ms to run NodePressure ...
	I1126 20:22:07.126642  261431 start.go:242] waiting for startup goroutines ...
	I1126 20:22:07.126649  261431 start.go:247] waiting for cluster config update ...
	I1126 20:22:07.126658  261431 start.go:256] writing updated cluster config ...
	I1126 20:22:07.126885  261431 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:07.130655  261431 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:07.134067  261431 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.138560  261431 pod_ready.go:94] pod "coredns-66bc5c9577-s8rrr" is "Ready"
	I1126 20:22:07.138587  261431 pod_ready.go:86] duration metric: took 4.494796ms for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.330386  261431 pod_ready.go:83] waiting for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.334692  261431 pod_ready.go:94] pod "etcd-embed-certs-949294" is "Ready"
	I1126 20:22:07.334715  261431 pod_ready.go:86] duration metric: took 4.301929ms for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.336593  261431 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.340276  261431 pod_ready.go:94] pod "kube-apiserver-embed-certs-949294" is "Ready"
	I1126 20:22:07.340294  261431 pod_ready.go:86] duration metric: took 3.682877ms for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.342010  261431 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.534442  261431 pod_ready.go:94] pod "kube-controller-manager-embed-certs-949294" is "Ready"
	I1126 20:22:07.534492  261431 pod_ready.go:86] duration metric: took 192.457774ms for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.753667  261431 pod_ready.go:83] waiting for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:08.134178  261431 pod_ready.go:94] pod "kube-proxy-qnjvr" is "Ready"
	I1126 20:22:08.134204  261431 pod_ready.go:86] duration metric: took 380.510028ms for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:08.335500  261431 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:08.735665  261431 pod_ready.go:94] pod "kube-scheduler-embed-certs-949294" is "Ready"
	I1126 20:22:08.735692  261431 pod_ready.go:86] duration metric: took 400.16533ms for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:08.735708  261431 pod_ready.go:40] duration metric: took 1.605025098s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:08.783109  261431 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:22:08.784962  261431 out.go:179] * Done! kubectl is now configured to use "embed-certs-949294" cluster and "default" namespace by default
	I1126 20:22:04.827324  271308 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:22:04.827561  271308 start.go:159] libmachine.API.Create for "default-k8s-diff-port-178152" (driver="docker")
	I1126 20:22:04.827593  271308 client.go:173] LocalClient.Create starting
	I1126 20:22:04.827679  271308 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 20:22:04.827716  271308 main.go:143] libmachine: Decoding PEM data...
	I1126 20:22:04.827738  271308 main.go:143] libmachine: Parsing certificate...
	I1126 20:22:04.827802  271308 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 20:22:04.827824  271308 main.go:143] libmachine: Decoding PEM data...
	I1126 20:22:04.827838  271308 main.go:143] libmachine: Parsing certificate...
	I1126 20:22:04.828220  271308 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-178152 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:22:04.847528  271308 cli_runner.go:211] docker network inspect default-k8s-diff-port-178152 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:22:04.847600  271308 network_create.go:284] running [docker network inspect default-k8s-diff-port-178152] to gather additional debugging logs...
	I1126 20:22:04.847623  271308 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-178152
	W1126 20:22:04.867905  271308 cli_runner.go:211] docker network inspect default-k8s-diff-port-178152 returned with exit code 1
	I1126 20:22:04.867944  271308 network_create.go:287] error running [docker network inspect default-k8s-diff-port-178152]: docker network inspect default-k8s-diff-port-178152: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-178152 not found
	I1126 20:22:04.867966  271308 network_create.go:289] output of [docker network inspect default-k8s-diff-port-178152]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-178152 not found
	
	** /stderr **
	I1126 20:22:04.868064  271308 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:04.890185  271308 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
	I1126 20:22:04.891106  271308 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bbc6aaddce08 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:93:ea:3f:0d:7e} reservation:<nil>}
	I1126 20:22:04.891952  271308 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ecd673f5b2f2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:d0:57:1a:06:91} reservation:<nil>}
	I1126 20:22:04.892738  271308 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2ae6f13df7ae IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:83:a1:96:dc:99} reservation:<nil>}
	I1126 20:22:04.893728  271308 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec8b70}
	I1126 20:22:04.893757  271308 network_create.go:124] attempt to create docker network default-k8s-diff-port-178152 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1126 20:22:04.893806  271308 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-178152 default-k8s-diff-port-178152
	I1126 20:22:04.942722  271308 network_create.go:108] docker network default-k8s-diff-port-178152 192.168.85.0/24 created
	I1126 20:22:04.942755  271308 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-178152" container
	I1126 20:22:04.942806  271308 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:22:04.960716  271308 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-178152 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-178152 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:22:05.178134  271308 oci.go:103] Successfully created a docker volume default-k8s-diff-port-178152
	I1126 20:22:05.178212  271308 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-178152-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-178152 --entrypoint /usr/bin/test -v default-k8s-diff-port-178152:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:22:05.578136  271308 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-178152
	I1126 20:22:05.578202  271308 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:05.578220  271308 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:22:05.578282  271308 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-178152:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:22:08.546269  271308 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-178152:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (2.967942667s)
	I1126 20:22:08.546299  271308 kic.go:203] duration metric: took 2.968075813s to extract preloaded images to volume ...
	W1126 20:22:08.546386  271308 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:22:08.546425  271308 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:22:08.546499  271308 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:22:08.611613  271308 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-178152 --name default-k8s-diff-port-178152 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-178152 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-178152 --network default-k8s-diff-port-178152 --ip 192.168.85.2 --volume default-k8s-diff-port-178152:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:22:08.980723  271308 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Running}}
	I1126 20:22:09.003717  271308 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:22:09.021327  271308 cli_runner.go:164] Run: docker exec default-k8s-diff-port-178152 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:22:09.071731  271308 oci.go:144] the created container "default-k8s-diff-port-178152" has a running status.
	I1126 20:22:09.071768  271308 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa...
	I1126 20:22:09.107238  271308 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:22:09.141646  271308 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:22:09.163343  271308 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:22:09.163363  271308 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-178152 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:22:09.210974  271308 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:22:09.235839  271308 machine.go:94] provisionDockerMachine start ...
	I1126 20:22:09.235920  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:09.256630  271308 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:09.256857  271308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:22:09.256864  271308 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:22:09.257590  271308 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47692->127.0.0.1:33073: read: connection reset by peer
	I1126 20:22:05.678431  271769 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:22:05.678722  271769 start.go:159] libmachine.API.Create for "newest-cni-297942" (driver="docker")
	I1126 20:22:05.678764  271769 client.go:173] LocalClient.Create starting
	I1126 20:22:05.678856  271769 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 20:22:05.678896  271769 main.go:143] libmachine: Decoding PEM data...
	I1126 20:22:05.678922  271769 main.go:143] libmachine: Parsing certificate...
	I1126 20:22:05.678995  271769 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 20:22:05.679022  271769 main.go:143] libmachine: Decoding PEM data...
	I1126 20:22:05.679039  271769 main.go:143] libmachine: Parsing certificate...
	I1126 20:22:05.679441  271769 cli_runner.go:164] Run: docker network inspect newest-cni-297942 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:22:05.699514  271769 cli_runner.go:211] docker network inspect newest-cni-297942 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:22:05.699580  271769 network_create.go:284] running [docker network inspect newest-cni-297942] to gather additional debugging logs...
	I1126 20:22:05.699602  271769 cli_runner.go:164] Run: docker network inspect newest-cni-297942
	W1126 20:22:05.722635  271769 cli_runner.go:211] docker network inspect newest-cni-297942 returned with exit code 1
	I1126 20:22:05.722671  271769 network_create.go:287] error running [docker network inspect newest-cni-297942]: docker network inspect newest-cni-297942: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-297942 not found
	I1126 20:22:05.722686  271769 network_create.go:289] output of [docker network inspect newest-cni-297942]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-297942 not found
	
	** /stderr **
	I1126 20:22:05.722798  271769 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:05.743162  271769 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
	I1126 20:22:05.743885  271769 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bbc6aaddce08 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:93:ea:3f:0d:7e} reservation:<nil>}
	I1126 20:22:05.744626  271769 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ecd673f5b2f2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:d0:57:1a:06:91} reservation:<nil>}
	I1126 20:22:05.745129  271769 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2ae6f13df7ae IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:83:a1:96:dc:99} reservation:<nil>}
	I1126 20:22:05.745800  271769 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ec68256d4118 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:72:5d:f9:71:de:9b} reservation:<nil>}
	I1126 20:22:05.746353  271769 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7fd9c7914891 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ee:1d:e0:51:23:a7} reservation:<nil>}
	I1126 20:22:05.747164  271769 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4cf80}
	I1126 20:22:05.747188  271769 network_create.go:124] attempt to create docker network newest-cni-297942 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1126 20:22:05.747246  271769 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-297942 newest-cni-297942
	I1126 20:22:05.801243  271769 network_create.go:108] docker network newest-cni-297942 192.168.103.0/24 created
	I1126 20:22:05.801281  271769 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-297942" container
	I1126 20:22:05.801353  271769 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:22:05.822765  271769 cli_runner.go:164] Run: docker volume create newest-cni-297942 --label name.minikube.sigs.k8s.io=newest-cni-297942 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:22:05.842825  271769 oci.go:103] Successfully created a docker volume newest-cni-297942
	I1126 20:22:05.842924  271769 cli_runner.go:164] Run: docker run --rm --name newest-cni-297942-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-297942 --entrypoint /usr/bin/test -v newest-cni-297942:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:22:07.127932  271769 cli_runner.go:217] Completed: docker run --rm --name newest-cni-297942-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-297942 --entrypoint /usr/bin/test -v newest-cni-297942:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.284967347s)
	I1126 20:22:07.127968  271769 oci.go:107] Successfully prepared a docker volume newest-cni-297942
	I1126 20:22:07.128030  271769 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:07.128045  271769 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:22:07.128100  271769 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-297942:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 26 20:22:02 no-preload-026579 crio[770]: time="2025-11-26T20:22:02.232659646Z" level=info msg="Starting container: 4045729e5fd4602961a76810c99e92e77bcdb059b10b0c1fb5208144bd301827" id=da46cfa1-5cdf-44b1-9145-1a54b6cb554f name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:02 no-preload-026579 crio[770]: time="2025-11-26T20:22:02.2381528Z" level=info msg="Started container" PID=2880 containerID=4045729e5fd4602961a76810c99e92e77bcdb059b10b0c1fb5208144bd301827 description=kube-system/coredns-66bc5c9577-wl4xp/coredns id=da46cfa1-5cdf-44b1-9145-1a54b6cb554f name=/runtime.v1.RuntimeService/StartContainer sandboxID=9f0e86562b2adb82b2c0fd373944c8a7511f09ebdf48338251ffa5eb91b99f73
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.197707647Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5a519916-7bea-4deb-b4ec-4256d2004ed4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.19781053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.204688984Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9a88201a59cba5080d0cb66462245c960334c6be13be411ed0410e28619099c3 UID:4e7644bf-6a7c-407a-bcef-89fd47b6b2d5 NetNS:/var/run/netns/df747765-338a-46ef-acc6-9e9003bc7751 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004ae7d0}] Aliases:map[]}"
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.204722371Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.216278657Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9a88201a59cba5080d0cb66462245c960334c6be13be411ed0410e28619099c3 UID:4e7644bf-6a7c-407a-bcef-89fd47b6b2d5 NetNS:/var/run/netns/df747765-338a-46ef-acc6-9e9003bc7751 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004ae7d0}] Aliases:map[]}"
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.216556343Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.217822937Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.218673022Z" level=info msg="Ran pod sandbox 9a88201a59cba5080d0cb66462245c960334c6be13be411ed0410e28619099c3 with infra container: default/busybox/POD" id=5a519916-7bea-4deb-b4ec-4256d2004ed4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.220001745Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=01ac456d-b8e6-4c97-b4ab-dca2f113e964 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.220155915Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=01ac456d-b8e6-4c97-b4ab-dca2f113e964 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.220208867Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=01ac456d-b8e6-4c97-b4ab-dca2f113e964 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.220865874Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=72319c37-a064-487f-a978-9a3e1887ec20 name=/runtime.v1.ImageService/PullImage
	Nov 26 20:22:05 no-preload-026579 crio[770]: time="2025-11-26T20:22:05.224091474Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.124901524Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=72319c37-a064-487f-a978-9a3e1887ec20 name=/runtime.v1.ImageService/PullImage
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.125540489Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ac586f3-3a10-460c-9437-76a81dfd9755 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.127007134Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0ffcfb5-90c0-4d8c-91be-03bcba727791 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.13144784Z" level=info msg="Creating container: default/busybox/busybox" id=a829ed5f-7d7a-4cf1-8ce0-0607e64f3ac4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.131618631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.136488279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.136947137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.187260871Z" level=info msg="Created container 02e32e71ecbdc4db5dec1e83e690ba4fe3104b2c9ba6f663d6f51216f03be82a: default/busybox/busybox" id=a829ed5f-7d7a-4cf1-8ce0-0607e64f3ac4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.187991302Z" level=info msg="Starting container: 02e32e71ecbdc4db5dec1e83e690ba4fe3104b2c9ba6f663d6f51216f03be82a" id=d69d9c85-c2d6-4f78-962b-1eda9b416dee name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:06 no-preload-026579 crio[770]: time="2025-11-26T20:22:06.190222045Z" level=info msg="Started container" PID=2953 containerID=02e32e71ecbdc4db5dec1e83e690ba4fe3104b2c9ba6f663d6f51216f03be82a description=default/busybox/busybox id=d69d9c85-c2d6-4f78-962b-1eda9b416dee name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a88201a59cba5080d0cb66462245c960334c6be13be411ed0410e28619099c3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	02e32e71ecbdc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   9a88201a59cba       busybox                                     default
	4045729e5fd46       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   9f0e86562b2ad       coredns-66bc5c9577-wl4xp                    kube-system
	3af89f0346b29       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   7d2c3b13a0733       storage-provisioner                         kube-system
	3f58b31c458a4       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   dff81d4ef7464       kindnet-8rfpj                               kube-system
	a97b5a74ad48d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   51c7c6897e089       kube-proxy-ktbwp                            kube-system
	1b1d11494334f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   52e6f40a21e3c       kube-controller-manager-no-preload-026579   kube-system
	4a44ca2397ca4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   47f6057c8013a       kube-apiserver-no-preload-026579            kube-system
	11abef718b3de       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   86d4888e7a2e7       etcd-no-preload-026579                      kube-system
	e136b4086130b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   ef912469c9687       kube-scheduler-no-preload-026579            kube-system
	
	
	==> coredns [4045729e5fd4602961a76810c99e92e77bcdb059b10b0c1fb5208144bd301827] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39051 - 54612 "HINFO IN 5374101310179108084.96743417078441543. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.077908328s
	
	
	==> describe nodes <==
	Name:               no-preload-026579
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-026579
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=no-preload-026579
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_21_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:21:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-026579
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:22:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:22:13 +0000   Wed, 26 Nov 2025 20:21:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:22:13 +0000   Wed, 26 Nov 2025 20:21:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:22:13 +0000   Wed, 26 Nov 2025 20:21:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:22:13 +0000   Wed, 26 Nov 2025 20:22:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-026579
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                896379d3-12e9-47c2-b887-9f21dde83abe
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-wl4xp                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-026579                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-8rfpj                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-026579             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-026579    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-ktbwp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-026579             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node no-preload-026579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node no-preload-026579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node no-preload-026579 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node no-preload-026579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node no-preload-026579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node no-preload-026579 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node no-preload-026579 event: Registered Node no-preload-026579 in Controller
	  Normal  NodeReady                13s                kubelet          Node no-preload-026579 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [11abef718b3de7b038eb737092b00608ae7187506ee106f6d8dcfbdb8da62c5d] <==
	{"level":"warn","ts":"2025-11-26T20:21:39.691355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.700301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.707990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.715059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.723023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.729267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.736871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.743529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.751337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.758631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.764800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.772542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.778436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.790563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.797596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.804601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.810534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.816358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.823003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.829813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.836359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.850741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.856528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.862851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:39.912814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47438","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:22:14 up  1:04,  0 user,  load average: 2.65, 2.86, 1.94
	Linux no-preload-026579 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f58b31c458a496c535d021fb049982c76456799e75fbd8f1cbdf65447ce434f] <==
	I1126 20:21:51.141512       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:21:51.141802       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:21:51.141947       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:21:51.141968       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:21:51.141992       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:21:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:21:51.343799       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:21:51.343873       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:21:51.343888       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:21:51.344400       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:21:51.844281       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:21:51.844307       1 metrics.go:72] Registering metrics
	I1126 20:21:51.844380       1 controller.go:711] "Syncing nftables rules"
	I1126 20:22:01.346600       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:22:01.346669       1 main.go:301] handling current node
	I1126 20:22:11.346562       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:22:11.346628       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4a44ca2397ca447895a436cde011607dc103a7b4a2e8ce6428b98c165b111483] <==
	I1126 20:21:40.393959       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:21:40.394037       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:21:40.395903       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:21:40.396707       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:21:40.403012       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:21:40.403165       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:21:40.591793       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:21:41.295533       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:21:41.299600       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:21:41.299618       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:21:41.764143       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:21:41.806316       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:21:41.898876       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:21:41.904390       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1126 20:21:41.905271       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:21:41.909200       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:21:42.319070       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:21:42.815141       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:21:42.822782       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:21:42.828226       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:21:47.373445       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:21:47.376814       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:21:48.375206       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1126 20:21:48.429677       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1126 20:22:12.983019       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:53428: use of closed network connection
	
	
	==> kube-controller-manager [1b1d11494334f734a5b9c0e3b7e0da523f5bb5bf9e2638ee6048339bad66a0c5] <==
	I1126 20:21:47.318375       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:21:47.318594       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:21:47.318884       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:21:47.318974       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:21:47.319044       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:21:47.319161       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-026579"
	I1126 20:21:47.319213       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1126 20:21:47.319688       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:21:47.319708       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:21:47.319735       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:21:47.319762       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:21:47.319859       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:21:47.320190       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:21:47.320385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:21:47.323785       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:21:47.324882       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:21:47.324948       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:21:47.325019       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:21:47.325031       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:21:47.325038       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:21:47.329138       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:21:47.330446       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-026579" podCIDRs=["10.244.0.0/24"]
	I1126 20:21:47.340441       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:21:47.344687       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:02.320921       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a97b5a74ad48dfa82da2357c4f7d8b49c762aaadb32b3a8756ddf3252585e4df] <==
	I1126 20:21:48.852997       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:21:48.911630       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:21:49.012475       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:21:49.012526       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:21:49.012663       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:21:49.030499       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:21:49.030558       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:21:49.036006       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:21:49.036380       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:21:49.036405       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:21:49.039811       1 config.go:200] "Starting service config controller"
	I1126 20:21:49.039828       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:21:49.039843       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:21:49.039856       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:21:49.039831       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:21:49.039869       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:21:49.039971       1 config.go:309] "Starting node config controller"
	I1126 20:21:49.039977       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:21:49.039983       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:21:49.140395       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:21:49.140411       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:21:49.140440       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e136b4086130bc5179f3137017a048201614573b96fb0c8c4dde9cf3da79b5d3] <==
	E1126 20:21:40.347576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:21:40.347446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:21:40.347481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:21:40.347496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:21:40.347518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 20:21:40.347721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:21:40.347578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:21:40.347396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:21:40.347531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:21:40.347784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:21:40.347817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:21:40.347881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:21:41.181140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1126 20:21:41.199341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:21:41.203298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:21:41.225316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:21:41.337049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:21:41.438139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:21:41.443094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:21:41.464314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:21:41.515470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:21:41.545545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 20:21:41.550556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:21:41.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1126 20:21:44.444814       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:21:43 no-preload-026579 kubelet[2272]: I1126 20:21:43.729700    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-026579" podStartSLOduration=2.729683913 podStartE2EDuration="2.729683913s" podCreationTimestamp="2025-11-26 20:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:21:43.72914282 +0000 UTC m=+1.129255528" watchObservedRunningTime="2025-11-26 20:21:43.729683913 +0000 UTC m=+1.129796621"
	Nov 26 20:21:43 no-preload-026579 kubelet[2272]: I1126 20:21:43.746271    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-026579" podStartSLOduration=1.7462573369999999 podStartE2EDuration="1.746257337s" podCreationTimestamp="2025-11-26 20:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:21:43.739063825 +0000 UTC m=+1.139176532" watchObservedRunningTime="2025-11-26 20:21:43.746257337 +0000 UTC m=+1.146370042"
	Nov 26 20:21:43 no-preload-026579 kubelet[2272]: I1126 20:21:43.746342    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-026579" podStartSLOduration=1.746338605 podStartE2EDuration="1.746338605s" podCreationTimestamp="2025-11-26 20:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:21:43.746333149 +0000 UTC m=+1.146445845" watchObservedRunningTime="2025-11-26 20:21:43.746338605 +0000 UTC m=+1.146451314"
	Nov 26 20:21:43 no-preload-026579 kubelet[2272]: I1126 20:21:43.753670    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-026579" podStartSLOduration=1.753659142 podStartE2EDuration="1.753659142s" podCreationTimestamp="2025-11-26 20:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:21:43.753398799 +0000 UTC m=+1.153511519" watchObservedRunningTime="2025-11-26 20:21:43.753659142 +0000 UTC m=+1.153771847"
	Nov 26 20:21:47 no-preload-026579 kubelet[2272]: I1126 20:21:47.404741    2272 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 26 20:21:47 no-preload-026579 kubelet[2272]: I1126 20:21:47.405434    2272 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 26 20:21:48 no-preload-026579 kubelet[2272]: I1126 20:21:48.509875    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93566a91-b6dc-47fa-9d46-9ebf0fc4704a-kube-proxy\") pod \"kube-proxy-ktbwp\" (UID: \"93566a91-b6dc-47fa-9d46-9ebf0fc4704a\") " pod="kube-system/kube-proxy-ktbwp"
	Nov 26 20:21:48 no-preload-026579 kubelet[2272]: I1126 20:21:48.510395    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkl8k\" (UniqueName: \"kubernetes.io/projected/93566a91-b6dc-47fa-9d46-9ebf0fc4704a-kube-api-access-wkl8k\") pod \"kube-proxy-ktbwp\" (UID: \"93566a91-b6dc-47fa-9d46-9ebf0fc4704a\") " pod="kube-system/kube-proxy-ktbwp"
	Nov 26 20:21:48 no-preload-026579 kubelet[2272]: I1126 20:21:48.510444    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6g66\" (UniqueName: \"kubernetes.io/projected/09d32618-edb6-49f0-b9ce-af0f0751b53f-kube-api-access-j6g66\") pod \"kindnet-8rfpj\" (UID: \"09d32618-edb6-49f0-b9ce-af0f0751b53f\") " pod="kube-system/kindnet-8rfpj"
	Nov 26 20:21:48 no-preload-026579 kubelet[2272]: I1126 20:21:48.510518    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93566a91-b6dc-47fa-9d46-9ebf0fc4704a-lib-modules\") pod \"kube-proxy-ktbwp\" (UID: \"93566a91-b6dc-47fa-9d46-9ebf0fc4704a\") " pod="kube-system/kube-proxy-ktbwp"
	Nov 26 20:21:48 no-preload-026579 kubelet[2272]: I1126 20:21:48.510547    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/09d32618-edb6-49f0-b9ce-af0f0751b53f-cni-cfg\") pod \"kindnet-8rfpj\" (UID: \"09d32618-edb6-49f0-b9ce-af0f0751b53f\") " pod="kube-system/kindnet-8rfpj"
	Nov 26 20:21:48 no-preload-026579 kubelet[2272]: I1126 20:21:48.510576    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d32618-edb6-49f0-b9ce-af0f0751b53f-lib-modules\") pod \"kindnet-8rfpj\" (UID: \"09d32618-edb6-49f0-b9ce-af0f0751b53f\") " pod="kube-system/kindnet-8rfpj"
	Nov 26 20:21:48 no-preload-026579 kubelet[2272]: I1126 20:21:48.510602    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93566a91-b6dc-47fa-9d46-9ebf0fc4704a-xtables-lock\") pod \"kube-proxy-ktbwp\" (UID: \"93566a91-b6dc-47fa-9d46-9ebf0fc4704a\") " pod="kube-system/kube-proxy-ktbwp"
	Nov 26 20:21:48 no-preload-026579 kubelet[2272]: I1126 20:21:48.510622    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09d32618-edb6-49f0-b9ce-af0f0751b53f-xtables-lock\") pod \"kindnet-8rfpj\" (UID: \"09d32618-edb6-49f0-b9ce-af0f0751b53f\") " pod="kube-system/kindnet-8rfpj"
	Nov 26 20:21:49 no-preload-026579 kubelet[2272]: I1126 20:21:49.744725    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ktbwp" podStartSLOduration=1.744702897 podStartE2EDuration="1.744702897s" podCreationTimestamp="2025-11-26 20:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:21:49.7434277 +0000 UTC m=+7.143540407" watchObservedRunningTime="2025-11-26 20:21:49.744702897 +0000 UTC m=+7.144815605"
	Nov 26 20:21:51 no-preload-026579 kubelet[2272]: I1126 20:21:51.735593    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8rfpj" podStartSLOduration=1.521408112 podStartE2EDuration="3.735575035s" podCreationTimestamp="2025-11-26 20:21:48 +0000 UTC" firstStartedPulling="2025-11-26 20:21:48.740300883 +0000 UTC m=+6.140413582" lastFinishedPulling="2025-11-26 20:21:50.954467804 +0000 UTC m=+8.354580505" observedRunningTime="2025-11-26 20:21:51.735267614 +0000 UTC m=+9.135380321" watchObservedRunningTime="2025-11-26 20:21:51.735575035 +0000 UTC m=+9.135687742"
	Nov 26 20:22:01 no-preload-026579 kubelet[2272]: I1126 20:22:01.844944    2272 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 26 20:22:01 no-preload-026579 kubelet[2272]: I1126 20:22:01.907388    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e2f8aa92-297b-4be4-a3a2-45a956763aad-tmp\") pod \"storage-provisioner\" (UID: \"e2f8aa92-297b-4be4-a3a2-45a956763aad\") " pod="kube-system/storage-provisioner"
	Nov 26 20:22:01 no-preload-026579 kubelet[2272]: I1126 20:22:01.907441    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wdgx\" (UniqueName: \"kubernetes.io/projected/e2f8aa92-297b-4be4-a3a2-45a956763aad-kube-api-access-2wdgx\") pod \"storage-provisioner\" (UID: \"e2f8aa92-297b-4be4-a3a2-45a956763aad\") " pod="kube-system/storage-provisioner"
	Nov 26 20:22:01 no-preload-026579 kubelet[2272]: I1126 20:22:01.907518    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1cf9739-1b9a-44d7-a932-447ac94e142d-config-volume\") pod \"coredns-66bc5c9577-wl4xp\" (UID: \"e1cf9739-1b9a-44d7-a932-447ac94e142d\") " pod="kube-system/coredns-66bc5c9577-wl4xp"
	Nov 26 20:22:01 no-preload-026579 kubelet[2272]: I1126 20:22:01.907553    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfjk\" (UniqueName: \"kubernetes.io/projected/e1cf9739-1b9a-44d7-a932-447ac94e142d-kube-api-access-pwfjk\") pod \"coredns-66bc5c9577-wl4xp\" (UID: \"e1cf9739-1b9a-44d7-a932-447ac94e142d\") " pod="kube-system/coredns-66bc5c9577-wl4xp"
	Nov 26 20:22:02 no-preload-026579 kubelet[2272]: I1126 20:22:02.760951    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.760929432 podStartE2EDuration="14.760929432s" podCreationTimestamp="2025-11-26 20:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:02.760562934 +0000 UTC m=+20.160675641" watchObservedRunningTime="2025-11-26 20:22:02.760929432 +0000 UTC m=+20.161042139"
	Nov 26 20:22:04 no-preload-026579 kubelet[2272]: I1126 20:22:04.887059    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wl4xp" podStartSLOduration=16.887035877 podStartE2EDuration="16.887035877s" podCreationTimestamp="2025-11-26 20:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:02.771057775 +0000 UTC m=+20.171170483" watchObservedRunningTime="2025-11-26 20:22:04.887035877 +0000 UTC m=+22.287148584"
	Nov 26 20:22:04 no-preload-026579 kubelet[2272]: I1126 20:22:04.924144    2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg9vg\" (UniqueName: \"kubernetes.io/projected/4e7644bf-6a7c-407a-bcef-89fd47b6b2d5-kube-api-access-vg9vg\") pod \"busybox\" (UID: \"4e7644bf-6a7c-407a-bcef-89fd47b6b2d5\") " pod="default/busybox"
	Nov 26 20:22:06 no-preload-026579 kubelet[2272]: I1126 20:22:06.846168    2272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.940195504 podStartE2EDuration="2.846144469s" podCreationTimestamp="2025-11-26 20:22:04 +0000 UTC" firstStartedPulling="2025-11-26 20:22:05.220440869 +0000 UTC m=+22.620553761" lastFinishedPulling="2025-11-26 20:22:06.126390018 +0000 UTC m=+23.526502726" observedRunningTime="2025-11-26 20:22:06.845763195 +0000 UTC m=+24.245875902" watchObservedRunningTime="2025-11-26 20:22:06.846144469 +0000 UTC m=+24.246257178"
	
	
	==> storage-provisioner [3af89f0346b293b7e8459a56a4ce18500ae0869958f36d62e35db34a76093a3b] <==
	I1126 20:22:02.248807       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:22:02.259781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:22:02.259913       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:22:02.264277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:02.282737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:22:02.282988       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:22:02.283284       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-026579_e8593a19-deb9-4630-9742-fac04c4e4256!
	I1126 20:22:02.284134       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"001ce068-24a6-4540-989a-014660d8c6e6", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-026579_e8593a19-deb9-4630-9742-fac04c4e4256 became leader
	W1126 20:22:02.291450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:02.307673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:22:02.383959       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-026579_e8593a19-deb9-4630-9742-fac04c4e4256!
	W1126 20:22:04.311300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:04.315397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:06.341253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:06.382726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:08.385986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:08.484547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:10.488306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:10.552786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:12.556200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:12.560998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:14.564782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:14.569050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-026579 -n no-preload-026579
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-026579 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.04s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (255.500506ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-949294 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-949294 describe deploy/metrics-server -n kube-system: exit status 1 (65.02921ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-949294 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-949294
helpers_test.go:243: (dbg) docker inspect embed-certs-949294:

-- stdout --
	[
	    {
	        "Id": "86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430",
	        "Created": "2025-11-26T20:21:31.21255744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 262852,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:21:31.405303563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/hostname",
	        "HostsPath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/hosts",
	        "LogPath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430-json.log",
	        "Name": "/embed-certs-949294",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-949294:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-949294",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430",
	                "LowerDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-949294",
	                "Source": "/var/lib/docker/volumes/embed-certs-949294/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-949294",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-949294",
	                "name.minikube.sigs.k8s.io": "embed-certs-949294",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "275c8a10a11585d547f799683546cbf0b73ae849c7de9e2fbc949bb025f91e19",
	            "SandboxKey": "/var/run/docker/netns/275c8a10a115",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-949294": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7fd9c7914891185e47dacdba5bd1d1c0b9a651e39050d7a01ee422b067e5fad7",
	                    "EndpointID": "9899d9fd4cda55621ccbf3a4d4a9fcb24ef400dea1600d844fdb7cc6222da045",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b6:11:eb:6a:a5:01",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-949294",
	                        "86fea694f6d2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
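As an aside for readers digging through the `docker inspect` output above: the published host ports for the kic container live under `NetworkSettings.Ports`, keyed by container port. A minimal sketch of pulling them out (not part of minikube or the test harness; the sample JSON is abridged from the output above):

```python
import json

# Abridged inspect output mirroring the NetworkSettings block above;
# only two of the five published ports are included for illustration.
inspect_output = json.loads("""
[
    {
        "NetworkSettings": {
            "Ports": {
                "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33068"}],
                "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33071"}]
            }
        }
    }
]
""")

def host_ports(inspect_json):
    """Map container port (e.g. '8443/tcp') to its first published host port."""
    ports = inspect_json[0]["NetworkSettings"]["Ports"]
    return {name: bindings[0]["HostPort"]
            for name, bindings in ports.items() if bindings}

print(host_ports(inspect_output))
```

The same mapping is what minikube itself resolves when it dials the apiserver at `127.0.0.1:33071` for this profile.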
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-949294 -n embed-certs-949294
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-949294 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-949294 logs -n 25: (1.014736286s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-706331 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-706331          │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ delete  │ -p cert-options-706331                                                                                                                                                                                                                        │ cert-options-706331          │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:19 UTC │ 26 Nov 25 20:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-157431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │                     │
	│ stop    │ -p old-k8s-version-157431 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-157431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ image   │ old-k8s-version-157431 image list --format=json                                                                                                                                                                                               │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ pause   │ -p old-k8s-version-157431 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	│ delete  │ -p old-k8s-version-157431                                                                                                                                                                                                                     │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ delete  │ -p old-k8s-version-157431                                                                                                                                                                                                                     │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p cert-expiration-571738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ delete  │ -p cert-expiration-571738                                                                                                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p stopped-upgrade-211103                                                                                                                                                                                                                     │ stopped-upgrade-211103       │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p kubernetes-upgrade-225144                                                                                                                                                                                                                  │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p disable-driver-mounts-221304                                                                                                                                                                                                               │ disable-driver-mounts-221304 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p no-preload-026579 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:22:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:22:05.470014  271769 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:22:05.470138  271769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:05.470150  271769 out.go:374] Setting ErrFile to fd 2...
	I1126 20:22:05.470157  271769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:05.470439  271769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:22:05.471066  271769 out.go:368] Setting JSON to false
	I1126 20:22:05.472553  271769 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3875,"bootTime":1764184650,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:22:05.472612  271769 start.go:143] virtualization: kvm guest
	I1126 20:22:05.474366  271769 out.go:179] * [newest-cni-297942] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:22:05.476213  271769 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:22:05.476251  271769 notify.go:221] Checking for updates...
	I1126 20:22:05.478287  271769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:22:05.479637  271769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:05.480781  271769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:22:05.482026  271769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:22:05.483310  271769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:22:05.485154  271769 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:05.485292  271769 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:05.485421  271769 config.go:182] Loaded profile config "no-preload-026579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:05.485567  271769 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:22:05.514350  271769 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:22:05.514516  271769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:05.577974  271769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:71 SystemTime:2025-11-26 20:22:05.566964083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:05.578082  271769 docker.go:319] overlay module found
	I1126 20:22:05.579912  271769 out.go:179] * Using the docker driver based on user configuration
	I1126 20:22:05.580962  271769 start.go:309] selected driver: docker
	I1126 20:22:05.580974  271769 start.go:927] validating driver "docker" against <nil>
	I1126 20:22:05.580984  271769 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:22:05.581559  271769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:05.645595  271769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-26 20:22:05.634039842 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:05.645821  271769 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1126 20:22:05.645863  271769 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1126 20:22:05.646177  271769 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:22:05.648392  271769 out.go:179] * Using Docker driver with root privileges
	I1126 20:22:05.649430  271769 cni.go:84] Creating CNI manager for ""
	I1126 20:22:05.649513  271769 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:05.649528  271769 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:22:05.649592  271769 start.go:353] cluster config:
	{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:05.650829  271769 out.go:179] * Starting "newest-cni-297942" primary control-plane node in "newest-cni-297942" cluster
	I1126 20:22:05.651898  271769 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:22:05.653019  271769 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:22:05.653979  271769 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:05.654006  271769 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:22:05.654014  271769 cache.go:65] Caching tarball of preloaded images
	I1126 20:22:05.654069  271769 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:22:05.654092  271769 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:22:05.654105  271769 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:22:05.654204  271769 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json ...
	I1126 20:22:05.654223  271769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json: {Name:mke134f9dc36b8353ce5e4cfc96424f05d910165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:05.676290  271769 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:22:05.676312  271769 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:22:05.676329  271769 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:22:05.676359  271769 start.go:360] acquireMachinesLock for newest-cni-297942: {Name:mkec4aea2213ece57272965b7ad56143d17ef93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:05.676452  271769 start.go:364] duration metric: took 70.692µs to acquireMachinesLock for "newest-cni-297942"
	I1126 20:22:05.676495  271769 start.go:93] Provisioning new machine with config: &{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:22:05.676600  271769 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:22:06.006640  261431 node_ready.go:49] node "embed-certs-949294" is "Ready"
	I1126 20:22:06.006675  261431 node_ready.go:38] duration metric: took 11.505308269s for node "embed-certs-949294" to be "Ready" ...
	I1126 20:22:06.006693  261431 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:06.006753  261431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:06.026499  261431 api_server.go:72] duration metric: took 11.815858466s to wait for apiserver process to appear ...
	I1126 20:22:06.026533  261431 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:06.026553  261431 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1126 20:22:06.032814  261431 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1126 20:22:06.033944  261431 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:06.033997  261431 api_server.go:131] duration metric: took 7.436748ms to wait for apiserver health ...
	I1126 20:22:06.034023  261431 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:06.037638  261431 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:06.037665  261431 system_pods.go:61] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:06.037671  261431 system_pods.go:61] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:06.037676  261431 system_pods.go:61] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:06.037679  261431 system_pods.go:61] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:06.037683  261431 system_pods.go:61] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:06.037695  261431 system_pods.go:61] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:06.037702  261431 system_pods.go:61] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:06.037704  261431 system_pods.go:61] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Pending
	I1126 20:22:06.037710  261431 system_pods.go:74] duration metric: took 3.674285ms to wait for pod list to return data ...
	I1126 20:22:06.037725  261431 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:06.044500  261431 default_sa.go:45] found service account: "default"
	I1126 20:22:06.044522  261431 default_sa.go:55] duration metric: took 6.791361ms for default service account to be created ...
	I1126 20:22:06.044532  261431 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:06.047723  261431 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:06.047754  261431 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:06.047763  261431 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:06.047772  261431 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:06.047781  261431 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:06.047787  261431 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:06.047792  261431 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:06.047797  261431 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:06.047807  261431 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:06.047829  261431 retry.go:31] will retry after 263.412245ms: missing components: kube-dns
	I1126 20:22:06.316625  261431 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:06.316671  261431 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:06.316679  261431 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:06.316701  261431 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:06.316711  261431 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:06.316717  261431 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:06.316723  261431 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:06.316741  261431 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:06.316749  261431 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:06.316766  261431 retry.go:31] will retry after 364.189329ms: missing components: kube-dns
	I1126 20:22:06.685257  261431 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:06.685290  261431 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:06.685296  261431 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:06.685301  261431 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:06.685306  261431 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:06.685310  261431 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:06.685314  261431 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:06.685317  261431 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:06.685321  261431 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:06.685336  261431 retry.go:31] will retry after 366.846011ms: missing components: kube-dns
	I1126 20:22:07.109420  261431 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:07.109483  261431 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Running
	I1126 20:22:07.109495  261431 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running
	I1126 20:22:07.109501  261431 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:07.109508  261431 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running
	I1126 20:22:07.109519  261431 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running
	I1126 20:22:07.109525  261431 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:07.109539  261431 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running
	I1126 20:22:07.109546  261431 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Running
	I1126 20:22:07.109558  261431 system_pods.go:126] duration metric: took 1.065018599s to wait for k8s-apps to be running ...
	I1126 20:22:07.109572  261431 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:07.109629  261431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:07.123369  261431 system_svc.go:56] duration metric: took 13.789166ms WaitForService to wait for kubelet
	I1126 20:22:07.123397  261431 kubeadm.go:587] duration metric: took 12.912763607s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:07.123420  261431 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:07.126595  261431 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:07.126619  261431 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:07.126631  261431 node_conditions.go:105] duration metric: took 3.206434ms to run NodePressure ...
	I1126 20:22:07.126642  261431 start.go:242] waiting for startup goroutines ...
	I1126 20:22:07.126649  261431 start.go:247] waiting for cluster config update ...
	I1126 20:22:07.126658  261431 start.go:256] writing updated cluster config ...
	I1126 20:22:07.126885  261431 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:07.130655  261431 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:07.134067  261431 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.138560  261431 pod_ready.go:94] pod "coredns-66bc5c9577-s8rrr" is "Ready"
	I1126 20:22:07.138587  261431 pod_ready.go:86] duration metric: took 4.494796ms for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.330386  261431 pod_ready.go:83] waiting for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.334692  261431 pod_ready.go:94] pod "etcd-embed-certs-949294" is "Ready"
	I1126 20:22:07.334715  261431 pod_ready.go:86] duration metric: took 4.301929ms for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.336593  261431 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.340276  261431 pod_ready.go:94] pod "kube-apiserver-embed-certs-949294" is "Ready"
	I1126 20:22:07.340294  261431 pod_ready.go:86] duration metric: took 3.682877ms for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.342010  261431 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.534442  261431 pod_ready.go:94] pod "kube-controller-manager-embed-certs-949294" is "Ready"
	I1126 20:22:07.534492  261431 pod_ready.go:86] duration metric: took 192.457774ms for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:07.753667  261431 pod_ready.go:83] waiting for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:08.134178  261431 pod_ready.go:94] pod "kube-proxy-qnjvr" is "Ready"
	I1126 20:22:08.134204  261431 pod_ready.go:86] duration metric: took 380.510028ms for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:08.335500  261431 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:08.735665  261431 pod_ready.go:94] pod "kube-scheduler-embed-certs-949294" is "Ready"
	I1126 20:22:08.735692  261431 pod_ready.go:86] duration metric: took 400.16533ms for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:08.735708  261431 pod_ready.go:40] duration metric: took 1.605025098s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:08.783109  261431 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:22:08.784962  261431 out.go:179] * Done! kubectl is now configured to use "embed-certs-949294" cluster and "default" namespace by default
	I1126 20:22:04.827324  271308 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:22:04.827561  271308 start.go:159] libmachine.API.Create for "default-k8s-diff-port-178152" (driver="docker")
	I1126 20:22:04.827593  271308 client.go:173] LocalClient.Create starting
	I1126 20:22:04.827679  271308 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 20:22:04.827716  271308 main.go:143] libmachine: Decoding PEM data...
	I1126 20:22:04.827738  271308 main.go:143] libmachine: Parsing certificate...
	I1126 20:22:04.827802  271308 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 20:22:04.827824  271308 main.go:143] libmachine: Decoding PEM data...
	I1126 20:22:04.827838  271308 main.go:143] libmachine: Parsing certificate...
	I1126 20:22:04.828220  271308 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-178152 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:22:04.847528  271308 cli_runner.go:211] docker network inspect default-k8s-diff-port-178152 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:22:04.847600  271308 network_create.go:284] running [docker network inspect default-k8s-diff-port-178152] to gather additional debugging logs...
	I1126 20:22:04.847623  271308 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-178152
	W1126 20:22:04.867905  271308 cli_runner.go:211] docker network inspect default-k8s-diff-port-178152 returned with exit code 1
	I1126 20:22:04.867944  271308 network_create.go:287] error running [docker network inspect default-k8s-diff-port-178152]: docker network inspect default-k8s-diff-port-178152: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-178152 not found
	I1126 20:22:04.867966  271308 network_create.go:289] output of [docker network inspect default-k8s-diff-port-178152]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-178152 not found
	
	** /stderr **
	I1126 20:22:04.868064  271308 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:04.890185  271308 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
	I1126 20:22:04.891106  271308 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bbc6aaddce08 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:93:ea:3f:0d:7e} reservation:<nil>}
	I1126 20:22:04.891952  271308 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ecd673f5b2f2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:d0:57:1a:06:91} reservation:<nil>}
	I1126 20:22:04.892738  271308 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2ae6f13df7ae IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:83:a1:96:dc:99} reservation:<nil>}
	I1126 20:22:04.893728  271308 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec8b70}
	I1126 20:22:04.893757  271308 network_create.go:124] attempt to create docker network default-k8s-diff-port-178152 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1126 20:22:04.893806  271308 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-178152 default-k8s-diff-port-178152
	I1126 20:22:04.942722  271308 network_create.go:108] docker network default-k8s-diff-port-178152 192.168.85.0/24 created
	I1126 20:22:04.942755  271308 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-178152" container
	I1126 20:22:04.942806  271308 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:22:04.960716  271308 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-178152 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-178152 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:22:05.178134  271308 oci.go:103] Successfully created a docker volume default-k8s-diff-port-178152
	I1126 20:22:05.178212  271308 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-178152-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-178152 --entrypoint /usr/bin/test -v default-k8s-diff-port-178152:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:22:05.578136  271308 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-178152
	I1126 20:22:05.578202  271308 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:05.578220  271308 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:22:05.578282  271308 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-178152:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:22:08.546269  271308 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-178152:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (2.967942667s)
	I1126 20:22:08.546299  271308 kic.go:203] duration metric: took 2.968075813s to extract preloaded images to volume ...
	W1126 20:22:08.546386  271308 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:22:08.546425  271308 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:22:08.546499  271308 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:22:08.611613  271308 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-178152 --name default-k8s-diff-port-178152 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-178152 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-178152 --network default-k8s-diff-port-178152 --ip 192.168.85.2 --volume default-k8s-diff-port-178152:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:22:08.980723  271308 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Running}}
	I1126 20:22:09.003717  271308 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:22:09.021327  271308 cli_runner.go:164] Run: docker exec default-k8s-diff-port-178152 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:22:09.071731  271308 oci.go:144] the created container "default-k8s-diff-port-178152" has a running status.
	I1126 20:22:09.071768  271308 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa...
	I1126 20:22:09.107238  271308 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:22:09.141646  271308 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:22:09.163343  271308 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:22:09.163363  271308 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-178152 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:22:09.210974  271308 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:22:09.235839  271308 machine.go:94] provisionDockerMachine start ...
	I1126 20:22:09.235920  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:09.256630  271308 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:09.256857  271308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:22:09.256864  271308 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:22:09.257590  271308 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47692->127.0.0.1:33073: read: connection reset by peer
	I1126 20:22:05.678431  271769 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:22:05.678722  271769 start.go:159] libmachine.API.Create for "newest-cni-297942" (driver="docker")
	I1126 20:22:05.678764  271769 client.go:173] LocalClient.Create starting
	I1126 20:22:05.678856  271769 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 20:22:05.678896  271769 main.go:143] libmachine: Decoding PEM data...
	I1126 20:22:05.678922  271769 main.go:143] libmachine: Parsing certificate...
	I1126 20:22:05.678995  271769 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 20:22:05.679022  271769 main.go:143] libmachine: Decoding PEM data...
	I1126 20:22:05.679039  271769 main.go:143] libmachine: Parsing certificate...
	I1126 20:22:05.679441  271769 cli_runner.go:164] Run: docker network inspect newest-cni-297942 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:22:05.699514  271769 cli_runner.go:211] docker network inspect newest-cni-297942 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:22:05.699580  271769 network_create.go:284] running [docker network inspect newest-cni-297942] to gather additional debugging logs...
	I1126 20:22:05.699602  271769 cli_runner.go:164] Run: docker network inspect newest-cni-297942
	W1126 20:22:05.722635  271769 cli_runner.go:211] docker network inspect newest-cni-297942 returned with exit code 1
	I1126 20:22:05.722671  271769 network_create.go:287] error running [docker network inspect newest-cni-297942]: docker network inspect newest-cni-297942: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-297942 not found
	I1126 20:22:05.722686  271769 network_create.go:289] output of [docker network inspect newest-cni-297942]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-297942 not found
	
	** /stderr **
	I1126 20:22:05.722798  271769 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:05.743162  271769 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
	I1126 20:22:05.743885  271769 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bbc6aaddce08 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:93:ea:3f:0d:7e} reservation:<nil>}
	I1126 20:22:05.744626  271769 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ecd673f5b2f2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:d0:57:1a:06:91} reservation:<nil>}
	I1126 20:22:05.745129  271769 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2ae6f13df7ae IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:83:a1:96:dc:99} reservation:<nil>}
	I1126 20:22:05.745800  271769 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ec68256d4118 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:72:5d:f9:71:de:9b} reservation:<nil>}
	I1126 20:22:05.746353  271769 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7fd9c7914891 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ee:1d:e0:51:23:a7} reservation:<nil>}
	I1126 20:22:05.747164  271769 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4cf80}
	I1126 20:22:05.747188  271769 network_create.go:124] attempt to create docker network newest-cni-297942 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1126 20:22:05.747246  271769 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-297942 newest-cni-297942
	I1126 20:22:05.801243  271769 network_create.go:108] docker network newest-cni-297942 192.168.103.0/24 created
	I1126 20:22:05.801281  271769 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-297942" container
	I1126 20:22:05.801353  271769 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:22:05.822765  271769 cli_runner.go:164] Run: docker volume create newest-cni-297942 --label name.minikube.sigs.k8s.io=newest-cni-297942 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:22:05.842825  271769 oci.go:103] Successfully created a docker volume newest-cni-297942
	I1126 20:22:05.842924  271769 cli_runner.go:164] Run: docker run --rm --name newest-cni-297942-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-297942 --entrypoint /usr/bin/test -v newest-cni-297942:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:22:07.127932  271769 cli_runner.go:217] Completed: docker run --rm --name newest-cni-297942-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-297942 --entrypoint /usr/bin/test -v newest-cni-297942:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.284967347s)
	I1126 20:22:07.127968  271769 oci.go:107] Successfully prepared a docker volume newest-cni-297942
	I1126 20:22:07.128030  271769 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:07.128045  271769 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:22:07.128100  271769 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-297942:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:22:12.400263  271308 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178152
	
	I1126 20:22:12.400289  271308 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-178152"
	I1126 20:22:12.400349  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:12.417944  271308 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:12.418160  271308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:22:12.418173  271308 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-178152 && echo "default-k8s-diff-port-178152" | sudo tee /etc/hostname
	I1126 20:22:12.570120  271308 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178152
	
	I1126 20:22:12.570202  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:12.594552  271308 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:12.594824  271308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:22:12.594854  271308 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-178152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-178152/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-178152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:22:12.744968  271308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:22:12.744994  271308 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:22:12.745036  271308 ubuntu.go:190] setting up certificates
	I1126 20:22:12.745049  271308 provision.go:84] configureAuth start
	I1126 20:22:12.745102  271308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:22:12.771569  271308 provision.go:143] copyHostCerts
	I1126 20:22:12.771625  271308 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:22:12.771639  271308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:22:12.772217  271308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:22:12.772313  271308 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:22:12.772322  271308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:22:12.772352  271308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:22:12.772407  271308 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:22:12.772415  271308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:22:12.772437  271308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:22:12.772513  271308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-178152 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-178152 localhost minikube]
	I1126 20:22:12.836738  271308 provision.go:177] copyRemoteCerts
	I1126 20:22:12.836789  271308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:22:12.836836  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:12.855976  271308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:22:12.957524  271308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:22:12.979200  271308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1126 20:22:12.997619  271308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:22:13.016414  271308 provision.go:87] duration metric: took 271.353082ms to configureAuth
	I1126 20:22:13.016440  271308 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:22:13.016610  271308 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:13.016721  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:13.037662  271308 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:13.037897  271308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:22:13.037915  271308 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:22:13.321631  271308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:22:13.321665  271308 machine.go:97] duration metric: took 4.085805365s to provisionDockerMachine
	I1126 20:22:13.321678  271308 client.go:176] duration metric: took 8.494076321s to LocalClient.Create
	I1126 20:22:13.321700  271308 start.go:167] duration metric: took 8.494139541s to libmachine.API.Create "default-k8s-diff-port-178152"
	I1126 20:22:13.321725  271308 start.go:293] postStartSetup for "default-k8s-diff-port-178152" (driver="docker")
	I1126 20:22:13.321738  271308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:22:13.321801  271308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:22:13.321889  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:13.343362  271308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:22:13.446359  271308 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:22:13.449866  271308 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:22:13.449898  271308 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:22:13.449909  271308 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:22:13.449954  271308 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:22:13.450053  271308 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:22:13.450179  271308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:22:13.457793  271308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:13.476179  271308 start.go:296] duration metric: took 154.442745ms for postStartSetup
	I1126 20:22:13.476507  271308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:22:13.494478  271308 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/config.json ...
	I1126 20:22:13.494708  271308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:22:13.494752  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:13.511824  271308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:22:13.607640  271308 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:22:13.612038  271308 start.go:128] duration metric: took 8.786118327s to createHost
	I1126 20:22:13.612061  271308 start.go:83] releasing machines lock for "default-k8s-diff-port-178152", held for 8.786266352s
	I1126 20:22:13.612131  271308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:22:13.629845  271308 ssh_runner.go:195] Run: cat /version.json
	I1126 20:22:13.629892  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:13.629914  271308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:22:13.629979  271308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:22:13.649646  271308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:22:13.650751  271308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:22:13.828234  271308 ssh_runner.go:195] Run: systemctl --version
	I1126 20:22:13.834541  271308 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:22:13.867131  271308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:22:13.871450  271308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:22:13.871532  271308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:22:13.894592  271308 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:22:13.894614  271308 start.go:496] detecting cgroup driver to use...
	I1126 20:22:13.894645  271308 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:22:13.894686  271308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:22:13.909669  271308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:22:13.922163  271308 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:22:13.922213  271308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:22:13.939168  271308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:22:13.957373  271308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:22:14.046606  271308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:22:14.140987  271308 docker.go:234] disabling docker service ...
	I1126 20:22:14.141047  271308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:22:14.160266  271308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:22:14.172668  271308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:22:14.260251  271308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:22:14.352226  271308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:22:14.364840  271308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:22:14.379557  271308 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:22:14.379621  271308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:14.389366  271308 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:22:14.389421  271308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:14.398259  271308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:14.406434  271308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:14.414620  271308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:22:14.423189  271308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:14.431800  271308 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:14.445080  271308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:14.454301  271308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:22:14.461576  271308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:22:14.468514  271308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:14.555538  271308 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:22:14.708451  271308 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:22:14.708525  271308 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:22:14.712792  271308 start.go:564] Will wait 60s for crictl version
	I1126 20:22:14.712850  271308 ssh_runner.go:195] Run: which crictl
	I1126 20:22:14.716486  271308 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:22:14.744383  271308 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:22:14.744440  271308 ssh_runner.go:195] Run: crio --version
	I1126 20:22:14.772866  271308 ssh_runner.go:195] Run: crio --version
	I1126 20:22:14.801350  271308 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:22:12.092555  271769 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-297942:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.964377487s)
	I1126 20:22:12.092594  271769 kic.go:203] duration metric: took 4.964546122s to extract preloaded images to volume ...
	W1126 20:22:12.092712  271769 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:22:12.092741  271769 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:22:12.092788  271769 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:22:12.146696  271769 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-297942 --name newest-cni-297942 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-297942 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-297942 --network newest-cni-297942 --ip 192.168.103.2 --volume newest-cni-297942:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:22:12.444315  271769 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Running}}
	I1126 20:22:12.462853  271769 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:12.482053  271769 cli_runner.go:164] Run: docker exec newest-cni-297942 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:22:12.538306  271769 oci.go:144] the created container "newest-cni-297942" has a running status.
	I1126 20:22:12.538349  271769 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa...
	I1126 20:22:12.617056  271769 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:22:12.640104  271769 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:12.661548  271769 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:22:12.661572  271769 kic_runner.go:114] Args: [docker exec --privileged newest-cni-297942 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:22:12.711668  271769 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:12.734790  271769 machine.go:94] provisionDockerMachine start ...
	I1126 20:22:12.734904  271769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:12.759396  271769 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:12.759767  271769 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1126 20:22:12.759792  271769 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:22:12.760697  271769 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49844->127.0.0.1:33078: read: connection reset by peer
	
	
	==> CRI-O <==
	Nov 26 20:22:06 embed-certs-949294 crio[776]: time="2025-11-26T20:22:06.460687564Z" level=info msg="Starting container: e92255966539ff429ec2a7925e0a8c0263e059d1e71b11bedb3dbed681974c06" id=4c0847b4-7f1e-468a-984b-c12272d3c46a name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:06 embed-certs-949294 crio[776]: time="2025-11-26T20:22:06.462891791Z" level=info msg="Started container" PID=1837 containerID=e92255966539ff429ec2a7925e0a8c0263e059d1e71b11bedb3dbed681974c06 description=kube-system/coredns-66bc5c9577-s8rrr/coredns id=4c0847b4-7f1e-468a-984b-c12272d3c46a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9b2a9c9d9909ee60859a478d14b24b5a609f8521f6b0dcb6816f8d6b90eb0d9
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.282841378Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e30a31a3-9a87-4f5e-bff0-3d3fb61b089c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.282930614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.291872764Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3db3585da568568e6977a0d689d4253cb97d9ea6af51af3082b4bf6ffa5d2603 UID:bbd3f1ad-5639-44ac-bed1-8de1e6b81907 NetNS:/var/run/netns/bbfbfcea-9732-472a-b9c3-b60cd4db0879 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000510828}] Aliases:map[]}"
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.291913954Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.30779788Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3db3585da568568e6977a0d689d4253cb97d9ea6af51af3082b4bf6ffa5d2603 UID:bbd3f1ad-5639-44ac-bed1-8de1e6b81907 NetNS:/var/run/netns/bbfbfcea-9732-472a-b9c3-b60cd4db0879 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000510828}] Aliases:map[]}"
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.308122001Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.309271052Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.310648677Z" level=info msg="Ran pod sandbox 3db3585da568568e6977a0d689d4253cb97d9ea6af51af3082b4bf6ffa5d2603 with infra container: default/busybox/POD" id=e30a31a3-9a87-4f5e-bff0-3d3fb61b089c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.311993647Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5b79572e-8b99-4463-bd32-e1840e288d48 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.312204459Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5b79572e-8b99-4463-bd32-e1840e288d48 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.31225077Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5b79572e-8b99-4463-bd32-e1840e288d48 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.313153022Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b5858bae-169a-45b9-8337-374843b693ab name=/runtime.v1.ImageService/PullImage
	Nov 26 20:22:09 embed-certs-949294 crio[776]: time="2025-11-26T20:22:09.315296401Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 26 20:22:10 embed-certs-949294 crio[776]: time="2025-11-26T20:22:10.986283058Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b5858bae-169a-45b9-8337-374843b693ab name=/runtime.v1.ImageService/PullImage
	Nov 26 20:22:10 embed-certs-949294 crio[776]: time="2025-11-26T20:22:10.987157904Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=41271094-377d-4222-8357-d552ef9729de name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:10 embed-certs-949294 crio[776]: time="2025-11-26T20:22:10.988845678Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=751c96f4-2456-4a79-bf76-2dfae9fd61c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:10 embed-certs-949294 crio[776]: time="2025-11-26T20:22:10.992151405Z" level=info msg="Creating container: default/busybox/busybox" id=64cbc10c-cdb4-4925-a23f-e030c74dff4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:10 embed-certs-949294 crio[776]: time="2025-11-26T20:22:10.992279738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:10 embed-certs-949294 crio[776]: time="2025-11-26T20:22:10.995971179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:10 embed-certs-949294 crio[776]: time="2025-11-26T20:22:10.996711008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:11 embed-certs-949294 crio[776]: time="2025-11-26T20:22:11.016309322Z" level=info msg="Created container 1eb7394a29441e1876c38d8ebc1cf651bbe8bab6fed99e65bb275eeec7419a08: default/busybox/busybox" id=64cbc10c-cdb4-4925-a23f-e030c74dff4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:11 embed-certs-949294 crio[776]: time="2025-11-26T20:22:11.016993026Z" level=info msg="Starting container: 1eb7394a29441e1876c38d8ebc1cf651bbe8bab6fed99e65bb275eeec7419a08" id=202ced51-b801-48f9-bb8f-3407c24fa066 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:11 embed-certs-949294 crio[776]: time="2025-11-26T20:22:11.019226879Z" level=info msg="Started container" PID=1912 containerID=1eb7394a29441e1876c38d8ebc1cf651bbe8bab6fed99e65bb275eeec7419a08 description=default/busybox/busybox id=202ced51-b801-48f9-bb8f-3407c24fa066 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3db3585da568568e6977a0d689d4253cb97d9ea6af51af3082b4bf6ffa5d2603
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	1eb7394a29441       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   3db3585da5685       busybox                                      default
	e92255966539f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   f9b2a9c9d9909       coredns-66bc5c9577-s8rrr                     kube-system
	4ff0328054c3e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   b913061852a1f       storage-provisioner                          kube-system
	a18a9e9e54749       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   b55299e801880       kube-proxy-qnjvr                             kube-system
	89156c67fc329       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   b0642b79ba5c4       kindnet-9546l                                kube-system
	321f715d0bd24       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   b3567bbc94690       etcd-embed-certs-949294                      kube-system
	f4abe3c92b6b7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   0a0ab54860976       kube-controller-manager-embed-certs-949294   kube-system
	1874ca6c53841       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   fbde3eccba932       kube-scheduler-embed-certs-949294            kube-system
	6d014bb5b0e8c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   71a8401306df3       kube-apiserver-embed-certs-949294            kube-system
	
	
	==> coredns [e92255966539ff429ec2a7925e0a8c0263e059d1e71b11bedb3dbed681974c06] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48025 - 16921 "HINFO IN 2933187905863120186.8839108617349789132. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.907794408s
	
	
	==> describe nodes <==
	Name:               embed-certs-949294
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-949294
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=embed-certs-949294
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_21_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-949294
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:22:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:22:09 +0000   Wed, 26 Nov 2025 20:21:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:22:09 +0000   Wed, 26 Nov 2025 20:21:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:22:09 +0000   Wed, 26 Nov 2025 20:21:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:22:09 +0000   Wed, 26 Nov 2025 20:22:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-949294
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                aa80874f-b877-4d80-93ab-b99d96f2b5aa
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-s8rrr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-949294                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-9546l                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-949294             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-949294    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-qnjvr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-949294             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-949294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-949294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-949294 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-949294 event: Registered Node embed-certs-949294 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-949294 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [321f715d0bd2486f35c8e22d6cdda6bbf3b289abc55254b28390af2406fdd676] <==
	{"level":"warn","ts":"2025-11-26T20:21:45.569108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.575106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.582308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.589804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.595807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.602684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.608805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.615700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.627563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.634226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.640313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.646498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.652527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.658625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.666069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.673795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.680784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.695783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.702267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.718152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.725652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.731973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:21:45.799828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46104","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T20:22:06.844161Z","caller":"traceutil/trace.go:172","msg":"trace[430127849] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"106.049622ms","start":"2025-11-26T20:22:06.738088Z","end":"2025-11-26T20:22:06.844137Z","steps":["trace[430127849] 'process raft request'  (duration: 105.885027ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:22:07.106921Z","caller":"traceutil/trace.go:172","msg":"trace[1554619488] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"133.233185ms","start":"2025-11-26T20:22:06.973667Z","end":"2025-11-26T20:22:07.106900Z","steps":["trace[1554619488] 'process raft request'  (duration: 97.901121ms)","trace[1554619488] 'compare'  (duration: 35.21778ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:22:18 up  1:04,  0 user,  load average: 2.92, 2.91, 1.96
	Linux embed-certs-949294 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [89156c67fc3296df9ef1b69a24a288838953485dfa247ac91c86b3f3f819a18c] <==
	I1126 20:21:55.484728       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:21:55.485009       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1126 20:21:55.485141       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:21:55.485161       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:21:55.485187       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:21:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:21:55.784449       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:21:55.880586       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:21:55.880687       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:21:55.881747       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:21:56.181503       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:21:56.181632       1 metrics.go:72] Registering metrics
	I1126 20:21:56.279281       1 controller.go:711] "Syncing nftables rules"
	I1126 20:22:05.784625       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:22:05.784686       1 main.go:301] handling current node
	I1126 20:22:15.786586       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:22:15.786619       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d014bb5b0e8cfd1c85eb03493e2a908d1c976b47feba8191a31512495755f40] <==
	I1126 20:21:46.302185       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:21:46.302490       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:21:46.302949       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:21:46.309268       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:21:46.309537       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:21:46.309694       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:21:46.499160       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:21:47.202741       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:21:47.206431       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:21:47.206449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:21:47.636195       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:21:47.667667       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:21:47.705573       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:21:47.711089       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1126 20:21:47.711990       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:21:47.715788       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:21:48.248712       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:21:48.704246       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:21:48.718000       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:21:48.728650       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:21:53.400495       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1126 20:21:53.450577       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:21:54.153181       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:21:54.160361       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1126 20:22:17.065804       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:35236: use of closed network connection
	
	
	==> kube-controller-manager [f4abe3c92b6b7d61ebf58e3205e4a7810a3aca92e5d8a89a621c73ccea9ed8cd] <==
	I1126 20:21:53.249151       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:21:53.249301       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:21:53.249804       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:21:53.251043       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:21:53.251062       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:21:53.251140       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:21:53.255304       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:21:53.255368       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:21:53.255417       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:21:53.255428       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:21:53.255435       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:21:53.255370       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:21:53.255468       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:21:53.255482       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:21:53.260773       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:21:53.261137       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-949294" podCIDRs=["10.244.0.0/24"]
	I1126 20:21:53.262600       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:21:53.269738       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1126 20:21:53.276070       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:21:53.276222       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:21:53.276325       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-949294"
	I1126 20:21:53.276378       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1126 20:21:53.282429       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:21:53.291860       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:08.278649       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a18a9e9e5474950792e8560b3fd444a2355ca9ac0e86f2729a20f768ffa1f893] <==
	I1126 20:21:55.359401       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:21:55.424118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:21:55.524308       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:21:55.524348       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1126 20:21:55.524509       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:21:55.544652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:21:55.544700       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:21:55.549997       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:21:55.550349       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:21:55.550384       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:21:55.551955       1 config.go:200] "Starting service config controller"
	I1126 20:21:55.551977       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:21:55.551989       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:21:55.552006       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:21:55.552116       1 config.go:309] "Starting node config controller"
	I1126 20:21:55.552124       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:21:55.552295       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:21:55.552306       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:21:55.652606       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:21:55.652639       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:21:55.652649       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:21:55.652728       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [1874ca6c538413c790b1032c7f238b31ea2850c148a4fc04a8f5f8d1dda10d13] <==
	E1126 20:21:46.264167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:21:46.264229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:21:46.264247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:21:46.264256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:21:46.264295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:21:46.264403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:21:46.264686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:21:46.264711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:21:46.264768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:21:46.264925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:21:46.265225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:21:46.265295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:21:46.266244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:21:46.266341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:21:46.266361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:21:47.070814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 20:21:47.131949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1126 20:21:47.175312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:21:47.251882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:21:47.260311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:21:47.290764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:21:47.295552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:21:47.321908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:21:47.380545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1126 20:21:50.359126       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:21:53 embed-certs-949294 kubelet[1316]: I1126 20:21:53.436646    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5f44d5e6-677c-4df7-9534-bfdf1e6b06b4-cni-cfg\") pod \"kindnet-9546l\" (UID: \"5f44d5e6-677c-4df7-9534-bfdf1e6b06b4\") " pod="kube-system/kindnet-9546l"
	Nov 26 20:21:53 embed-certs-949294 kubelet[1316]: E1126 20:21:53.544047    1316 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 26 20:21:53 embed-certs-949294 kubelet[1316]: E1126 20:21:53.544086    1316 projected.go:196] Error preparing data for projected volume kube-api-access-d7lnj for pod kube-system/kube-proxy-qnjvr: configmap "kube-root-ca.crt" not found
	Nov 26 20:21:53 embed-certs-949294 kubelet[1316]: E1126 20:21:53.544135    1316 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 26 20:21:53 embed-certs-949294 kubelet[1316]: E1126 20:21:53.544174    1316 projected.go:196] Error preparing data for projected volume kube-api-access-6dh45 for pod kube-system/kindnet-9546l: configmap "kube-root-ca.crt" not found
	Nov 26 20:21:53 embed-certs-949294 kubelet[1316]: E1126 20:21:53.544186    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9dba8a9-9c13-46e2-9ada-a2b8daca8d73-kube-api-access-d7lnj podName:d9dba8a9-9c13-46e2-9ada-a2b8daca8d73 nodeName:}" failed. No retries permitted until 2025-11-26 20:21:54.044163057 +0000 UTC m=+5.541800941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d7lnj" (UniqueName: "kubernetes.io/projected/d9dba8a9-9c13-46e2-9ada-a2b8daca8d73-kube-api-access-d7lnj") pod "kube-proxy-qnjvr" (UID: "d9dba8a9-9c13-46e2-9ada-a2b8daca8d73") : configmap "kube-root-ca.crt" not found
	Nov 26 20:21:53 embed-certs-949294 kubelet[1316]: E1126 20:21:53.544229    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f44d5e6-677c-4df7-9534-bfdf1e6b06b4-kube-api-access-6dh45 podName:5f44d5e6-677c-4df7-9534-bfdf1e6b06b4 nodeName:}" failed. No retries permitted until 2025-11-26 20:21:54.044209771 +0000 UTC m=+5.541847656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6dh45" (UniqueName: "kubernetes.io/projected/5f44d5e6-677c-4df7-9534-bfdf1e6b06b4-kube-api-access-6dh45") pod "kindnet-9546l" (UID: "5f44d5e6-677c-4df7-9534-bfdf1e6b06b4") : configmap "kube-root-ca.crt" not found
	Nov 26 20:21:54 embed-certs-949294 kubelet[1316]: E1126 20:21:54.143338    1316 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 26 20:21:54 embed-certs-949294 kubelet[1316]: E1126 20:21:54.143384    1316 projected.go:196] Error preparing data for projected volume kube-api-access-d7lnj for pod kube-system/kube-proxy-qnjvr: configmap "kube-root-ca.crt" not found
	Nov 26 20:21:54 embed-certs-949294 kubelet[1316]: E1126 20:21:54.143451    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9dba8a9-9c13-46e2-9ada-a2b8daca8d73-kube-api-access-d7lnj podName:d9dba8a9-9c13-46e2-9ada-a2b8daca8d73 nodeName:}" failed. No retries permitted until 2025-11-26 20:21:55.143429318 +0000 UTC m=+6.641067203 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7lnj" (UniqueName: "kubernetes.io/projected/d9dba8a9-9c13-46e2-9ada-a2b8daca8d73-kube-api-access-d7lnj") pod "kube-proxy-qnjvr" (UID: "d9dba8a9-9c13-46e2-9ada-a2b8daca8d73") : configmap "kube-root-ca.crt" not found
	Nov 26 20:21:54 embed-certs-949294 kubelet[1316]: E1126 20:21:54.143346    1316 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 26 20:21:54 embed-certs-949294 kubelet[1316]: E1126 20:21:54.143521    1316 projected.go:196] Error preparing data for projected volume kube-api-access-6dh45 for pod kube-system/kindnet-9546l: configmap "kube-root-ca.crt" not found
	Nov 26 20:21:54 embed-certs-949294 kubelet[1316]: E1126 20:21:54.143593    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f44d5e6-677c-4df7-9534-bfdf1e6b06b4-kube-api-access-6dh45 podName:5f44d5e6-677c-4df7-9534-bfdf1e6b06b4 nodeName:}" failed. No retries permitted until 2025-11-26 20:21:55.143564646 +0000 UTC m=+6.641202546 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6dh45" (UniqueName: "kubernetes.io/projected/5f44d5e6-677c-4df7-9534-bfdf1e6b06b4-kube-api-access-6dh45") pod "kindnet-9546l" (UID: "5f44d5e6-677c-4df7-9534-bfdf1e6b06b4") : configmap "kube-root-ca.crt" not found
	Nov 26 20:21:55 embed-certs-949294 kubelet[1316]: I1126 20:21:55.728673    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9546l" podStartSLOduration=2.728651286 podStartE2EDuration="2.728651286s" podCreationTimestamp="2025-11-26 20:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:21:55.7280375 +0000 UTC m=+7.225675405" watchObservedRunningTime="2025-11-26 20:21:55.728651286 +0000 UTC m=+7.226289192"
	Nov 26 20:21:56 embed-certs-949294 kubelet[1316]: I1126 20:21:56.065700    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qnjvr" podStartSLOduration=3.065678027 podStartE2EDuration="3.065678027s" podCreationTimestamp="2025-11-26 20:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:21:55.740483689 +0000 UTC m=+7.238121582" watchObservedRunningTime="2025-11-26 20:21:56.065678027 +0000 UTC m=+7.563315954"
	Nov 26 20:22:05 embed-certs-949294 kubelet[1316]: I1126 20:22:05.895188    1316 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 26 20:22:06 embed-certs-949294 kubelet[1316]: I1126 20:22:06.026752    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb1d5777-ea89-40cb-ae5c-dd8bde47f3de-config-volume\") pod \"coredns-66bc5c9577-s8rrr\" (UID: \"fb1d5777-ea89-40cb-ae5c-dd8bde47f3de\") " pod="kube-system/coredns-66bc5c9577-s8rrr"
	Nov 26 20:22:06 embed-certs-949294 kubelet[1316]: I1126 20:22:06.026809    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z66dj\" (UniqueName: \"kubernetes.io/projected/fb1d5777-ea89-40cb-ae5c-dd8bde47f3de-kube-api-access-z66dj\") pod \"coredns-66bc5c9577-s8rrr\" (UID: \"fb1d5777-ea89-40cb-ae5c-dd8bde47f3de\") " pod="kube-system/coredns-66bc5c9577-s8rrr"
	Nov 26 20:22:06 embed-certs-949294 kubelet[1316]: I1126 20:22:06.128857    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ad12d5e5-d681-4dfc-9970-d2340ac55ed7-tmp\") pod \"storage-provisioner\" (UID: \"ad12d5e5-d681-4dfc-9970-d2340ac55ed7\") " pod="kube-system/storage-provisioner"
	Nov 26 20:22:06 embed-certs-949294 kubelet[1316]: I1126 20:22:06.128910    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgpxq\" (UniqueName: \"kubernetes.io/projected/ad12d5e5-d681-4dfc-9970-d2340ac55ed7-kube-api-access-dgpxq\") pod \"storage-provisioner\" (UID: \"ad12d5e5-d681-4dfc-9970-d2340ac55ed7\") " pod="kube-system/storage-provisioner"
	Nov 26 20:22:06 embed-certs-949294 kubelet[1316]: I1126 20:22:06.846752    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.846728715 podStartE2EDuration="12.846728715s" podCreationTimestamp="2025-11-26 20:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:06.846280412 +0000 UTC m=+18.343918318" watchObservedRunningTime="2025-11-26 20:22:06.846728715 +0000 UTC m=+18.344366622"
	Nov 26 20:22:08 embed-certs-949294 kubelet[1316]: I1126 20:22:08.975233    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s8rrr" podStartSLOduration=14.975203263000001 podStartE2EDuration="14.975203263s" podCreationTimestamp="2025-11-26 20:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:06.911208733 +0000 UTC m=+18.408846643" watchObservedRunningTime="2025-11-26 20:22:08.975203263 +0000 UTC m=+20.472841169"
	Nov 26 20:22:09 embed-certs-949294 kubelet[1316]: I1126 20:22:09.049104    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-249kx\" (UniqueName: \"kubernetes.io/projected/bbd3f1ad-5639-44ac-bed1-8de1e6b81907-kube-api-access-249kx\") pod \"busybox\" (UID: \"bbd3f1ad-5639-44ac-bed1-8de1e6b81907\") " pod="default/busybox"
	Nov 26 20:22:11 embed-certs-949294 kubelet[1316]: I1126 20:22:11.780112    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.104550743 podStartE2EDuration="3.780087488s" podCreationTimestamp="2025-11-26 20:22:08 +0000 UTC" firstStartedPulling="2025-11-26 20:22:09.312680487 +0000 UTC m=+20.810318393" lastFinishedPulling="2025-11-26 20:22:10.98821725 +0000 UTC m=+22.485855138" observedRunningTime="2025-11-26 20:22:11.780081459 +0000 UTC m=+23.277719365" watchObservedRunningTime="2025-11-26 20:22:11.780087488 +0000 UTC m=+23.277725408"
	Nov 26 20:22:17 embed-certs-949294 kubelet[1316]: E1126 20:22:17.065847    1316 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39078->127.0.0.1:36587: write tcp 127.0.0.1:39078->127.0.0.1:36587: write: broken pipe
	
	
	==> storage-provisioner [4ff0328054c3e027fae39a346f2343e0ced988a2267964255a324e857da13e8f] <==
	I1126 20:22:06.458442       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:22:06.470082       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:22:06.470153       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:22:06.472488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:06.477359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:22:06.477586       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:22:06.477662       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"05f20356-e266-4bee-9af8-d671ea0ca424", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-949294_87595e57-13b6-4822-a1ed-5c11271f704f became leader
	I1126 20:22:06.477745       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-949294_87595e57-13b6-4822-a1ed-5c11271f704f!
	W1126 20:22:06.479763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:06.484576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:22:06.578925       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-949294_87595e57-13b6-4822-a1ed-5c11271f704f!
	W1126 20:22:08.487574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:08.495167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:10.499174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:10.515950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:12.518675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:12.523124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:14.526590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:14.530158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:16.533607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:16.539016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:18.543204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:18.549347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-949294 -n embed-certs-949294
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-949294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (244.972538ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-297942
helpers_test.go:243: (dbg) docker inspect newest-cni-297942:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584",
	        "Created": "2025-11-26T20:22:12.162948812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273401,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:22:12.192738353Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/hostname",
	        "HostsPath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/hosts",
	        "LogPath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584-json.log",
	        "Name": "/newest-cni-297942",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-297942:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-297942",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584",
	                "LowerDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-297942",
	                "Source": "/var/lib/docker/volumes/newest-cni-297942/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-297942",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-297942",
	                "name.minikube.sigs.k8s.io": "newest-cni-297942",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "20ee487a09cbcdc20bd77c6befd6c3c2d2df02fb22238e8f223c37d1367cac57",
	            "SandboxKey": "/var/run/docker/netns/20ee487a09cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-297942": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a8acc179efb582b4f8ab1f8758542f842892d2dd2928aade1bbb97827e2c1af",
	                    "EndpointID": "767f960919ab5f8fb6dad2e8f142cc9a3b58b657b6dc6a3f41892083900b5a1b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "4a:02:d8:63:15:09",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-297942",
	                        "40b9f3c5f1a3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297942 -n newest-cni-297942
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-297942 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-157431 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-157431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ start   │ -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ image   │ old-k8s-version-157431 image list --format=json                                                                                                                                                                                               │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ pause   │ -p old-k8s-version-157431 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	│ delete  │ -p old-k8s-version-157431                                                                                                                                                                                                                     │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ delete  │ -p old-k8s-version-157431                                                                                                                                                                                                                     │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p cert-expiration-571738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ delete  │ -p cert-expiration-571738                                                                                                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p stopped-upgrade-211103                                                                                                                                                                                                                     │ stopped-upgrade-211103       │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p kubernetes-upgrade-225144                                                                                                                                                                                                                  │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p disable-driver-mounts-221304                                                                                                                                                                                                               │ disable-driver-mounts-221304 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p no-preload-026579 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p embed-certs-949294 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-026579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:22:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:22:31.719747  279050 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:22:31.719994  279050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:31.720002  279050 out.go:374] Setting ErrFile to fd 2...
	I1126 20:22:31.720006  279050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:31.720217  279050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:22:31.720732  279050 out.go:368] Setting JSON to false
	I1126 20:22:31.721905  279050 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3902,"bootTime":1764184650,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:22:31.721973  279050 start.go:143] virtualization: kvm guest
	I1126 20:22:31.723966  279050 out.go:179] * [no-preload-026579] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:22:31.725221  279050 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:22:31.725265  279050 notify.go:221] Checking for updates...
	I1126 20:22:31.727164  279050 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:22:31.728430  279050 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:31.730914  279050 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:22:31.732553  279050 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:22:31.733506  279050 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:22:31.734922  279050 config.go:182] Loaded profile config "no-preload-026579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:31.735656  279050 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:22:31.760880  279050 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:22:31.760958  279050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:31.825010  279050 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:80 SystemTime:2025-11-26 20:22:31.814283436 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:31.825151  279050 docker.go:319] overlay module found
	I1126 20:22:31.828307  279050 out.go:179] * Using the docker driver based on existing profile
	I1126 20:22:31.829506  279050 start.go:309] selected driver: docker
	I1126 20:22:31.829526  279050 start.go:927] validating driver "docker" against &{Name:no-preload-026579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-026579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:31.829633  279050 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:22:31.830297  279050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:31.888436  279050 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:80 SystemTime:2025-11-26 20:22:31.878864916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:31.888710  279050 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:31.888742  279050 cni.go:84] Creating CNI manager for ""
	I1126 20:22:31.888799  279050 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:31.888833  279050 start.go:353] cluster config:
	{Name:no-preload-026579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-026579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:31.890355  279050 out.go:179] * Starting "no-preload-026579" primary control-plane node in "no-preload-026579" cluster
	I1126 20:22:31.891484  279050 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:22:31.892559  279050 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:22:31.893628  279050 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:31.893722  279050 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:22:31.893745  279050 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/no-preload-026579/config.json ...
	I1126 20:22:31.893927  279050 cache.go:107] acquiring lock: {Name:mk05e35a48a31d1823f6307c80848de92380f364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:31.893929  279050 cache.go:107] acquiring lock: {Name:mked4af5955747c44c4e4dc7ab5483a785188e9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:31.893940  279050 cache.go:107] acquiring lock: {Name:mkde22fc4b1f300c54fa60fd3c7e1606cda8836c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:31.894022  279050 cache.go:115] /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1126 20:22:31.894025  279050 cache.go:115] /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1126 20:22:31.894035  279050 cache.go:115] /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1126 20:22:31.894037  279050 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 125.07µs
	I1126 20:22:31.894039  279050 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 132.824µs
	I1126 20:22:31.894046  279050 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 115.59µs
	I1126 20:22:31.894054  279050 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1126 20:22:31.894056  279050 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1126 20:22:31.894057  279050 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1126 20:22:31.894038  279050 cache.go:107] acquiring lock: {Name:mkedf39f98bd6a1022bdace96e5445eab87fbf37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:31.894072  279050 cache.go:107] acquiring lock: {Name:mka9faf912a8ac0925a74b3c832db07986ce1f98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:31.894068  279050 cache.go:107] acquiring lock: {Name:mk431171e8dea0f10ad7dc0c19a6b4464183af93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:31.894065  279050 cache.go:107] acquiring lock: {Name:mkba1ea8bbbff869b0eff9535d7b29abe4efd38b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:31.894096  279050 cache.go:107] acquiring lock: {Name:mk986f258f53b7e2b831bd1166e301d2039cbe38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:31.894133  279050 cache.go:115] /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1126 20:22:31.894141  279050 cache.go:115] /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1126 20:22:31.894148  279050 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 121.912µs
	I1126 20:22:31.894158  279050 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1126 20:22:31.894158  279050 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 87.975µs
	I1126 20:22:31.894170  279050 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1126 20:22:31.894201  279050 cache.go:115] /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1126 20:22:31.894207  279050 cache.go:115] /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1126 20:22:31.894207  279050 cache.go:115] /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1126 20:22:31.894210  279050 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 199.9µs
	I1126 20:22:31.894218  279050 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 170.787µs
	I1126 20:22:31.894225  279050 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1126 20:22:31.894222  279050 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 203.184µs
	I1126 20:22:31.894228  279050 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1126 20:22:31.894233  279050 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21974-10722/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1126 20:22:31.894241  279050 cache.go:87] Successfully saved all images to host disk.
	I1126 20:22:31.915306  279050 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:22:31.915324  279050 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:22:31.915337  279050 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:22:31.915363  279050 start.go:360] acquireMachinesLock for no-preload-026579: {Name:mkc9f7682dca497047729273172a2e8cbcb6b984 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:31.915416  279050 start.go:364] duration metric: took 35.82µs to acquireMachinesLock for "no-preload-026579"
	I1126 20:22:31.915432  279050 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:22:31.915437  279050 fix.go:54] fixHost starting: 
	I1126 20:22:31.915674  279050 cli_runner.go:164] Run: docker container inspect no-preload-026579 --format={{.State.Status}}
	I1126 20:22:31.933177  279050 fix.go:112] recreateIfNeeded on no-preload-026579: state=Stopped err=<nil>
	W1126 20:22:31.933210  279050 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:22:31.445522  271308 addons.go:530] duration metric: took 553.895746ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:22:31.746537  271308 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-178152" context rescaled to 1 replicas
	W1126 20:22:33.246377  271308 node_ready.go:57] node "default-k8s-diff-port-178152" has "Ready":"False" status (will retry)
	I1126 20:22:30.648991  271769 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:22:30.653546  271769 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:22:30.653564  271769 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:22:30.667026  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:22:30.874309  271769 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:22:30.874398  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:30.874435  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-297942 minikube.k8s.io/updated_at=2025_11_26T20_22_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=newest-cni-297942 minikube.k8s.io/primary=true
	I1126 20:22:30.884833  271769 ops.go:34] apiserver oom_adj: -16
	I1126 20:22:31.036036  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:31.536843  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:32.036927  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:32.536417  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:33.037012  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:33.536187  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:34.036891  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:34.536939  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:35.036140  271769 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:22:35.096691  271769 kubeadm.go:1114] duration metric: took 4.222358803s to wait for elevateKubeSystemPrivileges
	I1126 20:22:35.096722  271769 kubeadm.go:403] duration metric: took 15.798428264s to StartCluster
	I1126 20:22:35.096739  271769 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:35.096806  271769 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:35.098250  271769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:35.098498  271769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:22:35.098504  271769 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:22:35.098576  271769 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:22:35.098683  271769 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:35.098682  271769 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-297942"
	I1126 20:22:35.098692  271769 addons.go:70] Setting default-storageclass=true in profile "newest-cni-297942"
	I1126 20:22:35.098710  271769 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-297942"
	I1126 20:22:35.098753  271769 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:35.098711  271769 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-297942"
	I1126 20:22:35.099132  271769 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:35.099281  271769 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:35.100635  271769 out.go:179] * Verifying Kubernetes components...
	I1126 20:22:35.101739  271769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:35.120864  271769 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:22:35.122064  271769 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:35.122085  271769 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:35.122100  271769 addons.go:239] Setting addon default-storageclass=true in "newest-cni-297942"
	I1126 20:22:35.122134  271769 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:35.122139  271769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:35.122616  271769 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:35.150044  271769 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:35.150068  271769 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:35.150120  271769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:35.151166  271769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:35.172651  271769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:35.184904  271769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:22:35.240582  271769 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:35.264643  271769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:35.287319  271769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:35.397938  271769 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:35.398009  271769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:35.398236  271769 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1126 20:22:35.594319  271769 api_server.go:72] duration metric: took 495.780126ms to wait for apiserver process to appear ...
	I1126 20:22:35.594345  271769 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:35.594366  271769 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:35.599549  271769 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1126 20:22:35.600382  271769 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:35.600406  271769 api_server.go:131] duration metric: took 6.054267ms to wait for apiserver health ...
	I1126 20:22:35.600414  271769 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:35.603029  271769 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:35.603055  271769 system_pods.go:61] "coredns-66bc5c9577-bnszr" [ddf077eb-a9c4-42f2-a9b7-0aced551aa38] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:22:35.603063  271769 system_pods.go:61] "etcd-newest-cni-297942" [6520dcdd-9b71-4c83-8e54-7421dd7034af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:35.603072  271769 system_pods.go:61] "kindnet-wlhp7" [a6a459a7-87d9-4628-ad09-7e6e8d8445da] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:35.603080  271769 system_pods.go:61] "kube-apiserver-newest-cni-297942" [7c910df8-6020-46fb-a380-09a0698b3720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:35.603087  271769 system_pods.go:61] "kube-controller-manager-newest-cni-297942" [66f96670-85f0-47d1-859b-4844b80909d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:35.603095  271769 system_pods.go:61] "kube-proxy-lx6vw" [6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:35.603121  271769 system_pods.go:61] "kube-scheduler-newest-cni-297942" [4d59e692-80ac-4baa-9316-d8930f423531] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:35.603126  271769 system_pods.go:61] "storage-provisioner" [815d8b30-f9a4-4565-9f15-f45940446bd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:22:35.603140  271769 system_pods.go:74] duration metric: took 2.716032ms to wait for pod list to return data ...
	I1126 20:22:35.603146  271769 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:35.603579  271769 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 20:22:35.604878  271769 addons.go:530] duration metric: took 506.302069ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:22:35.605314  271769 default_sa.go:45] found service account: "default"
	I1126 20:22:35.605335  271769 default_sa.go:55] duration metric: took 2.182999ms for default service account to be created ...
	I1126 20:22:35.605347  271769 kubeadm.go:587] duration metric: took 506.813319ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:22:35.605367  271769 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:35.607312  271769 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:35.607334  271769 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:35.607350  271769 node_conditions.go:105] duration metric: took 1.977639ms to run NodePressure ...
	I1126 20:22:35.607364  271769 start.go:242] waiting for startup goroutines ...
	I1126 20:22:35.903088  271769 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-297942" context rescaled to 1 replicas
	I1126 20:22:35.903127  271769 start.go:247] waiting for cluster config update ...
	I1126 20:22:35.903141  271769 start.go:256] writing updated cluster config ...
	I1126 20:22:35.903378  271769 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:35.958672  271769 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:22:35.959987  271769 out.go:179] * Done! kubectl is now configured to use "newest-cni-297942" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.302425373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.303018128Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dfcd889f-b8d9-45a4-a7e3-3ff7d45ecd32 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.305392577Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.306297505Z" level=info msg="Ran pod sandbox 61432167bf91ad991420e4644d7f2e6a226a9dff532c276dcec4068a23cc9d05 with infra container: kube-system/kube-proxy-lx6vw/POD" id=dfcd889f-b8d9-45a4-a7e3-3ff7d45ecd32 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.307478523Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b2d2c6a3-564b-4c8a-89ea-658da13eb616 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.307618859Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=da4782c1-128a-4255-beca-c94e46f5472b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.308791161Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ad575c1c-6e43-4532-9397-5c21fab0de42 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.309988199Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.311818484Z" level=info msg="Ran pod sandbox 73717e352ac20e3c9578729ec1b4a12171cdc22e9783b9e9009e7bc27335cea4 with infra container: kube-system/kindnet-wlhp7/POD" id=da4782c1-128a-4255-beca-c94e46f5472b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.312875282Z" level=info msg="Creating container: kube-system/kube-proxy-lx6vw/kube-proxy" id=6cf0e943-fd77-439b-aac5-8e09e3ab91af name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.313004332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.314574849Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=43243f09-4075-413c-84b1-65d9670de7ca name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.316296722Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a150e8f0-1a34-48f8-af65-94fdb055b922 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.318951199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.31918536Z" level=info msg="Creating container: kube-system/kindnet-wlhp7/kindnet-cni" id=e04d4f83-add7-4776-bc34-f6e513e5f3ea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.319271174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.319522724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.323621304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.32424484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.355115916Z" level=info msg="Created container 0b0ce4c9a268302f0cc6368536609335af4ff058e367224a7549f702309e690a: kube-system/kindnet-wlhp7/kindnet-cni" id=e04d4f83-add7-4776-bc34-f6e513e5f3ea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.356339895Z" level=info msg="Starting container: 0b0ce4c9a268302f0cc6368536609335af4ff058e367224a7549f702309e690a" id=77303383-3151-4aed-a0ca-85e51c92880e name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.359377291Z" level=info msg="Started container" PID=1579 containerID=0b0ce4c9a268302f0cc6368536609335af4ff058e367224a7549f702309e690a description=kube-system/kindnet-wlhp7/kindnet-cni id=77303383-3151-4aed-a0ca-85e51c92880e name=/runtime.v1.RuntimeService/StartContainer sandboxID=73717e352ac20e3c9578729ec1b4a12171cdc22e9783b9e9009e7bc27335cea4
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.368422981Z" level=info msg="Created container 9d0e98987a6402252ff11908015430271dd2873b08a175df6111e2208fe0879e: kube-system/kube-proxy-lx6vw/kube-proxy" id=6cf0e943-fd77-439b-aac5-8e09e3ab91af name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.369628645Z" level=info msg="Starting container: 9d0e98987a6402252ff11908015430271dd2873b08a175df6111e2208fe0879e" id=ff28056f-2f6a-4107-93da-235ab2b04ba9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:35 newest-cni-297942 crio[774]: time="2025-11-26T20:22:35.380654715Z" level=info msg="Started container" PID=1578 containerID=9d0e98987a6402252ff11908015430271dd2873b08a175df6111e2208fe0879e description=kube-system/kube-proxy-lx6vw/kube-proxy id=ff28056f-2f6a-4107-93da-235ab2b04ba9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=61432167bf91ad991420e4644d7f2e6a226a9dff532c276dcec4068a23cc9d05
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0b0ce4c9a2683       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   73717e352ac20       kindnet-wlhp7                               kube-system
	9d0e98987a640       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   61432167bf91a       kube-proxy-lx6vw                            kube-system
	af08d6bef8812       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   91f6fb5e76c8e       kube-controller-manager-newest-cni-297942   kube-system
	65569b4be9702       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   a4905f5a1cb02       kube-apiserver-newest-cni-297942            kube-system
	d993adb4b72d6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   0e1d3e34d14ff       etcd-newest-cni-297942                      kube-system
	73107a032ffba       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   f24ddec0d7839       kube-scheduler-newest-cni-297942            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-297942
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-297942
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=newest-cni-297942
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_22_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:22:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-297942
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:22:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:22:30 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:22:30 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:22:30 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 26 Nov 2025 20:22:30 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-297942
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                8cbd9667-abfd-484d-8f07-0a0070bb411f
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-297942                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-wlhp7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-297942             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-297942    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-lx6vw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-297942             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-297942 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-297942 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-297942 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-297942 event: Registered Node newest-cni-297942 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [d993adb4b72d6124858d31d74ef3ccf8d85afd7a673f7aa5dab04dfc70fc099a] <==
	{"level":"warn","ts":"2025-11-26T20:22:26.639764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.649401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.656089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.663388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.671149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.678388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.685577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.692831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.700741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.711599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.718241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.725472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.732199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.738976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.748097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.755162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.762879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.770612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.778032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.785583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.794795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.809434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.816876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.824311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:26.892490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40256","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:22:37 up  1:05,  0 user,  load average: 3.46, 3.05, 2.02
	Linux newest-cni-297942 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b0ce4c9a268302f0cc6368536609335af4ff058e367224a7549f702309e690a] <==
	I1126 20:22:35.526166       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:22:35.526482       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1126 20:22:35.526630       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:22:35.526652       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:22:35.526681       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:22:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:22:35.819252       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:22:35.819297       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:22:35.819309       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:22:35.819450       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:22:36.019410       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:22:36.019440       1 metrics.go:72] Registering metrics
	I1126 20:22:36.019530       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [65569b4be97022e4694ce529e1a7cb98e5f2e84aba25ab7c423d5b050c0d47ba] <==
	I1126 20:22:27.364529       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:22:27.365470       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:22:27.370253       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:22:27.376195       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:22:27.379656       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:22:27.379721       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:22:27.384383       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:22:27.385948       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:22:28.268058       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:22:28.271749       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:22:28.271765       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:22:28.678905       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:22:28.710764       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:22:28.772003       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:22:28.776904       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1126 20:22:28.777621       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:22:28.781723       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:22:29.281333       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:22:30.048480       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:22:30.056289       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:22:30.061627       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:22:34.981366       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1126 20:22:35.286990       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:22:35.291016       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:22:35.335883       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [af08d6bef88127bef6d0ea4c4b5d2320c55c4c40d8bd68588f4ab9179577436e] <==
	I1126 20:22:34.278873       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:34.278887       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:22:34.278892       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:22:34.279021       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:22:34.279289       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:22:34.280159       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 20:22:34.280188       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:22:34.280235       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:22:34.280246       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:22:34.280236       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:22:34.280668       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:22:34.281747       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:22:34.283923       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:22:34.283964       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:22:34.286244       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:22:34.286286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:34.286296       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:22:34.286322       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:22:34.286330       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:22:34.286335       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:22:34.287612       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:34.290877       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:22:34.291685       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-297942" podCIDRs=["10.42.0.0/24"]
	I1126 20:22:34.291767       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:22:34.294951       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9d0e98987a6402252ff11908015430271dd2873b08a175df6111e2208fe0879e] <==
	I1126 20:22:35.431116       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:22:35.500490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:22:35.601219       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:22:35.601263       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1126 20:22:35.601374       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:22:35.621689       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:22:35.621735       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:22:35.626658       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:22:35.627766       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:22:35.627831       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:35.630296       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:22:35.630368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:22:35.630299       1 config.go:200] "Starting service config controller"
	I1126 20:22:35.630485       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:22:35.630371       1 config.go:309] "Starting node config controller"
	I1126 20:22:35.630553       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:22:35.630579       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:22:35.630320       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:22:35.630626       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:22:35.730686       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:22:35.730719       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:22:35.730801       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [73107a032ffbaceed8dd66c76453e48d187950f712f73eee36b32d6fbe80d964] <==
	E1126 20:22:27.320576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:22:27.320595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:22:27.320603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:22:27.320693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:22:27.320730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:22:27.320792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:22:27.320807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 20:22:27.320820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:22:27.320866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:22:27.320886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:22:27.320939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:22:27.320947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:22:27.320993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:22:27.321002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:22:27.321035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:22:27.321091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:22:28.199745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:22:28.206712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 20:22:28.223095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:22:28.249144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1126 20:22:28.273122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:22:28.342136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:22:28.356328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:22:28.484850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1126 20:22:30.713598       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: I1126 20:22:30.064409    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d575b3d33379e78b6aeafdb9ac5ffa7-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-297942\" (UID: \"2d575b3d33379e78b6aeafdb9ac5ffa7\") " pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: I1126 20:22:30.064441    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d575b3d33379e78b6aeafdb9ac5ffa7-usr-local-share-ca-certificates\") pod \"kube-apiserver-newest-cni-297942\" (UID: \"2d575b3d33379e78b6aeafdb9ac5ffa7\") " pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: I1126 20:22:30.064516    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a88473b31b972d8f20cec17519e3391-k8s-certs\") pod \"kube-controller-manager-newest-cni-297942\" (UID: \"8a88473b31b972d8f20cec17519e3391\") " pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: I1126 20:22:30.849136    1304 apiserver.go:52] "Watching apiserver"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: I1126 20:22:30.862306    1304 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: I1126 20:22:30.896748    1304 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: I1126 20:22:30.896942    1304 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: E1126 20:22:30.911292    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-297942\" already exists" pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: E1126 20:22:30.912719    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-297942\" already exists" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: I1126 20:22:30.977699    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-297942" podStartSLOduration=1.977652519 podStartE2EDuration="1.977652519s" podCreationTimestamp="2025-11-26 20:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:30.976914699 +0000 UTC m=+1.187755610" watchObservedRunningTime="2025-11-26 20:22:30.977652519 +0000 UTC m=+1.188493402"
	Nov 26 20:22:30 newest-cni-297942 kubelet[1304]: I1126 20:22:30.978178    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-297942" podStartSLOduration=1.978142916 podStartE2EDuration="1.978142916s" podCreationTimestamp="2025-11-26 20:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:30.947705515 +0000 UTC m=+1.158546398" watchObservedRunningTime="2025-11-26 20:22:30.978142916 +0000 UTC m=+1.188983800"
	Nov 26 20:22:31 newest-cni-297942 kubelet[1304]: I1126 20:22:31.011429    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-297942" podStartSLOduration=3.011407434 podStartE2EDuration="3.011407434s" podCreationTimestamp="2025-11-26 20:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:31.011294955 +0000 UTC m=+1.222135855" watchObservedRunningTime="2025-11-26 20:22:31.011407434 +0000 UTC m=+1.222248317"
	Nov 26 20:22:31 newest-cni-297942 kubelet[1304]: I1126 20:22:31.011702    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-297942" podStartSLOduration=2.011689139 podStartE2EDuration="2.011689139s" podCreationTimestamp="2025-11-26 20:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:30.997943626 +0000 UTC m=+1.208784523" watchObservedRunningTime="2025-11-26 20:22:31.011689139 +0000 UTC m=+1.222530041"
	Nov 26 20:22:34 newest-cni-297942 kubelet[1304]: I1126 20:22:34.307396    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 26 20:22:34 newest-cni-297942 kubelet[1304]: I1126 20:22:34.308011    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 26 20:22:35 newest-cni-297942 kubelet[1304]: I1126 20:22:35.097887    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c-xtables-lock\") pod \"kube-proxy-lx6vw\" (UID: \"6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c\") " pod="kube-system/kube-proxy-lx6vw"
	Nov 26 20:22:35 newest-cni-297942 kubelet[1304]: I1126 20:22:35.097932    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a6a459a7-87d9-4628-ad09-7e6e8d8445da-cni-cfg\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:35 newest-cni-297942 kubelet[1304]: I1126 20:22:35.097960    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6a459a7-87d9-4628-ad09-7e6e8d8445da-lib-modules\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:35 newest-cni-297942 kubelet[1304]: I1126 20:22:35.098040    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6ptm\" (UniqueName: \"kubernetes.io/projected/6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c-kube-api-access-t6ptm\") pod \"kube-proxy-lx6vw\" (UID: \"6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c\") " pod="kube-system/kube-proxy-lx6vw"
	Nov 26 20:22:35 newest-cni-297942 kubelet[1304]: I1126 20:22:35.098080    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nbhh\" (UniqueName: \"kubernetes.io/projected/a6a459a7-87d9-4628-ad09-7e6e8d8445da-kube-api-access-9nbhh\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:35 newest-cni-297942 kubelet[1304]: I1126 20:22:35.098122    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6a459a7-87d9-4628-ad09-7e6e8d8445da-xtables-lock\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:35 newest-cni-297942 kubelet[1304]: I1126 20:22:35.098179    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c-lib-modules\") pod \"kube-proxy-lx6vw\" (UID: \"6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c\") " pod="kube-system/kube-proxy-lx6vw"
	Nov 26 20:22:35 newest-cni-297942 kubelet[1304]: I1126 20:22:35.098243    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c-kube-proxy\") pod \"kube-proxy-lx6vw\" (UID: \"6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c\") " pod="kube-system/kube-proxy-lx6vw"
	Nov 26 20:22:35 newest-cni-297942 kubelet[1304]: I1126 20:22:35.917026    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wlhp7" podStartSLOduration=1.917007569 podStartE2EDuration="1.917007569s" podCreationTimestamp="2025-11-26 20:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:35.916815573 +0000 UTC m=+6.127656468" watchObservedRunningTime="2025-11-26 20:22:35.917007569 +0000 UTC m=+6.127848454"
	Nov 26 20:22:37 newest-cni-297942 kubelet[1304]: I1126 20:22:37.163972    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lx6vw" podStartSLOduration=3.163954503 podStartE2EDuration="3.163954503s" podCreationTimestamp="2025-11-26 20:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:35.92802784 +0000 UTC m=+6.138868739" watchObservedRunningTime="2025-11-26 20:22:37.163954503 +0000 UTC m=+7.374795385"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-297942 -n newest-cni-297942
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-297942 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bnszr storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-297942 describe pod coredns-66bc5c9577-bnszr storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-297942 describe pod coredns-66bc5c9577-bnszr storage-provisioner: exit status 1 (79.660402ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bnszr" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-297942 describe pod coredns-66bc5c9577-bnszr storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)
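The non-running-pods probe at helpers_test.go:269 above can be sketched as a small shell wrapper. This is an illustrative sketch only; `non_running_pods` and its context argument are made-up names, not part of the test harness, but the kubectl invocation matches the one in the log.

```shell
# Sketch of the post-mortem check from helpers_test.go:269: list every pod
# whose phase is not Running, across all namespaces, for a given kubeconfig
# context. Prints pod names space-separated, as in the log above.
non_running_pods() {
  local context="$1"
  kubectl --context "$context" get po -A \
    --field-selector=status.phase!=Running \
    -o jsonpath='{.items[*].metadata.name}'
}
```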

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (306.144538ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
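The MK_ADDON_ENABLE_PAUSED failure above stems from a pre-flight check that no containers are paused, which on this crio runner shells out to runc. A minimal sketch of that step under assumed behavior (`check_not_paused` is an illustrative name; the real logic is in minikube's Go code, and here a non-zero `runc list` exit, such as the missing /run/runc in the log, is mapped to the observed exit status 11):

```shell
# Sketch of the "check paused: list paused" step: ask runc for its container
# list and treat any failure (e.g. /run/runc absent) as a reason to refuse
# enabling the addon, mirroring exit status 11 from the log above.
check_not_paused() {
  if ! sudo runc list -f json >/dev/null 2>&1; then
    echo "MK_ADDON_ENABLE_PAUSED: runc list failed" >&2
    return 11
  fi
}
```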
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-178152 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-178152 describe deploy/metrics-server -n kube-system: exit status 1 (68.643769ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-178152 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
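The assertion at start_stop_delete_test.go:219 expects the `--registries` override to be prefixed onto the `--images` value, yielding the "fake.domain/registry.k8s.io/echoserver:1.4" string it searched for. How that expected ref is formed can be sketched as (hypothetical helper, not minikube code):

```shell
# Sketch: construct the image ref the assertion above looks for, from the
# --registries and --images flags passed to `minikube addons enable`.
override_image() {
  local registry="$1" image="$2"
  printf '%s/%s\n' "$registry" "$image"
}
```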
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-178152
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-178152:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370",
	        "Created": "2025-11-26T20:22:08.62900996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272566,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:22:08.679716219Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/hostname",
	        "HostsPath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/hosts",
	        "LogPath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370-json.log",
	        "Name": "/default-k8s-diff-port-178152",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-178152:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-178152",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370",
	                "LowerDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-178152",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-178152/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-178152",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-178152",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-178152",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cdcd5f73f8706465db36ef6ff0dc6f5c12cdcf39220f8a873df0d4ed0130bf39",
	            "SandboxKey": "/var/run/docker/netns/cdcd5f73f870",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-178152": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec68256d41186ab4784970795756969f4ed3452c84879229e3a4f0a4adc0c9b1",
	                    "EndpointID": "8e8fe76aa9d9474525b5cb4def8ed51b75f604efbed20738551e3076dc2e1e3e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "8a:2d:f4:16:5d:dd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-178152",
	                        "1da700037b3c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
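For reference, the `NetworkSettings.Ports` block in the inspect output above is the mapping from container ports (22, 2376, 8444, ...) to the ephemeral host ports minikube publishes on 127.0.0.1. A minimal sketch of extracting that mapping from `docker inspect` JSON with only the standard library (the shape is assumed to match the output shown; `docker port <container>` reports the same information directly):

```python
import json

def host_ports(inspect_json: str) -> dict:
    """Map each bound container port (e.g. '8444/tcp') to its host port,
    reading the NetworkSettings.Ports block of `docker inspect` output."""
    data = json.loads(inspect_json)  # docker inspect emits a JSON array
    ports = data[0]["NetworkSettings"]["Ports"]
    return {port: bindings[0]["HostPort"]
            for port, bindings in ports.items() if bindings}

sample = ('[{"NetworkSettings": {"Ports": {"8444/tcp": '
          '[{"HostIp": "127.0.0.1", "HostPort": "33076"}]}}}]')
# host_ports(sample) == {"8444/tcp": "33076"}
```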
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-178152 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-178152 logs -n 25: (1.161213507s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-157431                                                                                                                                                                                                                     │ old-k8s-version-157431       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p cert-expiration-571738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ delete  │ -p cert-expiration-571738                                                                                                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p stopped-upgrade-211103                                                                                                                                                                                                                     │ stopped-upgrade-211103       │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p kubernetes-upgrade-225144                                                                                                                                                                                                                  │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p disable-driver-mounts-221304                                                                                                                                                                                                               │ disable-driver-mounts-221304 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p no-preload-026579 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p embed-certs-949294 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p no-preload-026579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-949294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p newest-cni-297942 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-297942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:22:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:22:40.884483  283132 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:22:40.884766  283132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:40.884776  283132 out.go:374] Setting ErrFile to fd 2...
	I1126 20:22:40.884785  283132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:40.884987  283132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:22:40.885440  283132 out.go:368] Setting JSON to false
	I1126 20:22:40.886566  283132 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3911,"bootTime":1764184650,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:22:40.886632  283132 start.go:143] virtualization: kvm guest
	I1126 20:22:40.888379  283132 out.go:179] * [newest-cni-297942] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:22:40.889473  283132 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:22:40.889503  283132 notify.go:221] Checking for updates...
	I1126 20:22:40.892833  283132 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:22:40.894800  283132 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:40.896376  283132 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:22:40.897743  283132 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:22:40.898713  283132 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:22:40.900231  283132 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:40.900958  283132 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:22:40.928114  283132 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:22:40.928202  283132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:41.015656  283132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:22:41.003539781 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:41.015804  283132 docker.go:319] overlay module found
	I1126 20:22:41.016948  283132 out.go:179] * Using the docker driver based on existing profile
	I1126 20:22:41.017883  283132 start.go:309] selected driver: docker
	I1126 20:22:41.017898  283132 start.go:927] validating driver "docker" against &{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:41.018002  283132 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:22:41.018724  283132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:41.084121  283132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:22:41.072667777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:41.084507  283132 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:22:41.084546  283132 cni.go:84] Creating CNI manager for ""
	I1126 20:22:41.084623  283132 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:41.084677  283132 start.go:353] cluster config:
	{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:41.086652  283132 out.go:179] * Starting "newest-cni-297942" primary control-plane node in "newest-cni-297942" cluster
	I1126 20:22:41.087583  283132 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:22:41.088592  283132 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:22:41.089520  283132 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:41.089554  283132 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:22:41.089569  283132 cache.go:65] Caching tarball of preloaded images
	I1126 20:22:41.089623  283132 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:22:41.089678  283132 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:22:41.089692  283132 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:22:41.089796  283132 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json ...
	I1126 20:22:41.111178  283132 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:22:41.111197  283132 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:22:41.111211  283132 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:22:41.111242  283132 start.go:360] acquireMachinesLock for newest-cni-297942: {Name:mkec4aea2213ece57272965b7ad56143d17ef93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:41.111305  283132 start.go:364] duration metric: took 40.156µs to acquireMachinesLock for "newest-cni-297942"
	I1126 20:22:41.111323  283132 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:22:41.111333  283132 fix.go:54] fixHost starting: 
	I1126 20:22:41.111591  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:41.129559  283132 fix.go:112] recreateIfNeeded on newest-cni-297942: state=Stopped err=<nil>
	W1126 20:22:41.129580  283132 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:22:39.153389  279050 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:39.153408  279050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:39.153478  279050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-026579
	I1126 20:22:39.179591  279050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:22:39.180647  279050 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:39.180665  279050 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:39.180721  279050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-026579
	I1126 20:22:39.186501  279050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:22:39.209307  279050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:22:39.273960  279050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:39.287389  279050 node_ready.go:35] waiting up to 6m0s for node "no-preload-026579" to be "Ready" ...
	I1126 20:22:39.298799  279050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:39.300737  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:22:39.300753  279050 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:22:39.315410  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:22:39.315430  279050 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:22:39.325338  279050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:39.331515  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:22:39.331534  279050 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:22:39.348174  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:22:39.348194  279050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:22:39.369916  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:22:39.369951  279050 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:22:39.385646  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:22:39.385669  279050 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:22:39.400976  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:22:39.401000  279050 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:22:39.416714  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:22:39.416732  279050 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:22:39.433038  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:39.433061  279050 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:22:39.447449  279050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:40.587082  279050 node_ready.go:49] node "no-preload-026579" is "Ready"
	I1126 20:22:40.587113  279050 node_ready.go:38] duration metric: took 1.299680318s for node "no-preload-026579" to be "Ready" ...
	I1126 20:22:40.587129  279050 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:40.587180  279050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:41.152424  279050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.853595012s)
	I1126 20:22:41.152577  279050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.827200281s)
	I1126 20:22:41.152686  279050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.705180849s)
	I1126 20:22:41.152711  279050 api_server.go:72] duration metric: took 2.029918005s to wait for apiserver process to appear ...
	I1126 20:22:41.152721  279050 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:41.152742  279050 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:22:41.156567  279050 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-026579 addons enable metrics-server
	
	I1126 20:22:41.157768  279050 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:41.157789  279050 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:41.157819  279050 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1126 20:22:41.159540  279050 addons.go:530] duration metric: took 2.036715336s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1126 20:22:41.653489  279050 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:22:41.658910  279050 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:41.658967  279050 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:37.813684  281230 out.go:252] * Restarting existing docker container for "embed-certs-949294" ...
	I1126 20:22:37.813768  281230 cli_runner.go:164] Run: docker start embed-certs-949294
	I1126 20:22:38.131293  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:38.152794  281230 kic.go:430] container "embed-certs-949294" state is running.
	I1126 20:22:38.153224  281230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-949294
	I1126 20:22:38.175166  281230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/config.json ...
	I1126 20:22:38.175388  281230 machine.go:94] provisionDockerMachine start ...
	I1126 20:22:38.175448  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:38.196588  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:38.196809  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:38.196819  281230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:22:38.197513  281230 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60668->127.0.0.1:33088: read: connection reset by peer
	I1126 20:22:41.353546  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-949294
	
	I1126 20:22:41.353574  281230 ubuntu.go:182] provisioning hostname "embed-certs-949294"
	I1126 20:22:41.353632  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.371710  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.371940  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:41.371965  281230 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-949294 && echo "embed-certs-949294" | sudo tee /etc/hostname
	I1126 20:22:41.527011  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-949294
	
	I1126 20:22:41.527082  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.552128  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.552497  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:41.552529  281230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-949294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-949294/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-949294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:22:41.706552  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:22:41.706582  281230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:22:41.706605  281230 ubuntu.go:190] setting up certificates
	I1126 20:22:41.706617  281230 provision.go:84] configureAuth start
	I1126 20:22:41.706674  281230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-949294
	I1126 20:22:41.731291  281230 provision.go:143] copyHostCerts
	I1126 20:22:41.731358  281230 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:22:41.731373  281230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:22:41.731452  281230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:22:41.731672  281230 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:22:41.731683  281230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:22:41.731717  281230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:22:41.731789  281230 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:22:41.731798  281230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:22:41.731833  281230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:22:41.731947  281230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.embed-certs-949294 san=[127.0.0.1 192.168.94.2 embed-certs-949294 localhost minikube]
	I1126 20:22:41.778215  281230 provision.go:177] copyRemoteCerts
	I1126 20:22:41.778266  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:22:41.778295  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.797553  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:41.908508  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:22:41.927584  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:22:41.944361  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:22:41.960987  281230 provision.go:87] duration metric: took 254.359611ms to configureAuth
	I1126 20:22:41.961014  281230 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:22:41.961161  281230 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:41.961244  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.979703  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.980006  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:41.980032  281230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:22:42.318188  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:22:42.318211  281230 machine.go:97] duration metric: took 4.142808387s to provisionDockerMachine
	I1126 20:22:42.318225  281230 start.go:293] postStartSetup for "embed-certs-949294" (driver="docker")
	I1126 20:22:42.318237  281230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:22:42.318297  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:22:42.318364  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.338327  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.438215  281230 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:22:42.441404  281230 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:22:42.441434  281230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:22:42.441446  281230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:22:42.441539  281230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:22:42.441610  281230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:22:42.441700  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:22:42.448842  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:42.465662  281230 start.go:296] duration metric: took 147.425996ms for postStartSetup
	I1126 20:22:42.465729  281230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:22:42.465774  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.483672  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.582571  281230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:22:42.589248  281230 fix.go:56] duration metric: took 4.801612317s for fixHost
	I1126 20:22:42.589282  281230 start.go:83] releasing machines lock for "embed-certs-949294", held for 4.801666542s
	I1126 20:22:42.589356  281230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-949294
	I1126 20:22:42.613599  281230 ssh_runner.go:195] Run: cat /version.json
	I1126 20:22:42.613635  281230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:22:42.613653  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.613694  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.640998  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.641470  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.742494  281230 ssh_runner.go:195] Run: systemctl --version
	I1126 20:22:42.794845  281230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:22:42.828506  281230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:22:42.833001  281230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:22:42.833081  281230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:22:42.840611  281230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:22:42.840633  281230 start.go:496] detecting cgroup driver to use...
	I1126 20:22:42.840662  281230 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:22:42.840704  281230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:22:42.854304  281230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:22:42.865621  281230 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:22:42.865663  281230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:22:42.879121  281230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:22:42.890217  281230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:22:42.972124  281230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:22:43.054010  281230 docker.go:234] disabling docker service ...
	I1126 20:22:43.054076  281230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:22:43.067236  281230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:22:43.079079  281230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:22:43.158407  281230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:22:43.236403  281230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:22:43.249898  281230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:22:43.266098  281230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:22:43.266169  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.275593  281230 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:22:43.275650  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.286305  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.295428  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.304196  281230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:22:43.312078  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.320105  281230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.328187  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.336849  281230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:22:43.344213  281230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:22:43.351591  281230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:43.434081  281230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:22:43.584410  281230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:22:43.584499  281230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:22:43.588269  281230 start.go:564] Will wait 60s for crictl version
	I1126 20:22:43.588336  281230 ssh_runner.go:195] Run: which crictl
	I1126 20:22:43.591767  281230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:22:43.614952  281230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:22:43.615025  281230 ssh_runner.go:195] Run: crio --version
	I1126 20:22:43.641356  281230 ssh_runner.go:195] Run: crio --version
	I1126 20:22:43.667903  281230 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1126 20:22:39.746749  271308 node_ready.go:57] node "default-k8s-diff-port-178152" has "Ready":"False" status (will retry)
	I1126 20:22:42.249658  271308 node_ready.go:49] node "default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:42.250190  271308 node_ready.go:38] duration metric: took 11.006799541s for node "default-k8s-diff-port-178152" to be "Ready" ...
	I1126 20:22:42.250224  271308 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:42.250294  271308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:42.272956  271308 api_server.go:72] duration metric: took 11.381347219s to wait for apiserver process to appear ...
	I1126 20:22:42.272984  271308 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:42.273006  271308 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:22:42.279175  271308 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1126 20:22:42.280247  271308 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:42.280267  271308 api_server.go:131] duration metric: took 7.276294ms to wait for apiserver health ...
	I1126 20:22:42.280275  271308 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:42.283222  271308 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:42.283253  271308 system_pods.go:61] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.283261  271308 system_pods.go:61] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.283266  271308 system_pods.go:61] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.283269  271308 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.283273  271308 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.283280  271308 system_pods.go:61] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.283283  271308 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.283288  271308 system_pods.go:61] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.283293  271308 system_pods.go:74] duration metric: took 3.013459ms to wait for pod list to return data ...
	I1126 20:22:42.283303  271308 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:42.285341  271308 default_sa.go:45] found service account: "default"
	I1126 20:22:42.285361  271308 default_sa.go:55] duration metric: took 2.052746ms for default service account to be created ...
	I1126 20:22:42.285368  271308 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:42.287817  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.287844  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.287851  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.287871  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.287878  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.287906  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.287912  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.287918  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.287927  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.287958  271308 retry.go:31] will retry after 308.61666ms: missing components: kube-dns
	I1126 20:22:42.602933  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.602960  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.602966  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.602971  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.602975  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.602979  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.602982  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.602985  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.602989  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.603002  271308 retry.go:31] will retry after 352.870646ms: missing components: kube-dns
	I1126 20:22:42.960487  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.960513  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.960519  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.960525  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.960532  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.960536  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.960545  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.960550  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.960554  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.960567  271308 retry.go:31] will retry after 370.669224ms: missing components: kube-dns
	I1126 20:22:43.336323  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:43.336368  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Running
	I1126 20:22:43.336377  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:43.336384  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:43.336390  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:43.336401  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:43.336406  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:43.336412  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:43.336420  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Running
	I1126 20:22:43.336429  271308 system_pods.go:126] duration metric: took 1.051054713s to wait for k8s-apps to be running ...
	I1126 20:22:43.336442  271308 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:43.336492  271308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:43.349377  271308 system_svc.go:56] duration metric: took 12.93002ms WaitForService to wait for kubelet
	I1126 20:22:43.349397  271308 kubeadm.go:587] duration metric: took 12.457793394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:43.349410  271308 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:43.352231  271308 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:43.352254  271308 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:43.352270  271308 node_conditions.go:105] duration metric: took 2.855748ms to run NodePressure ...
	I1126 20:22:43.352281  271308 start.go:242] waiting for startup goroutines ...
	I1126 20:22:43.352290  271308 start.go:247] waiting for cluster config update ...
	I1126 20:22:43.352299  271308 start.go:256] writing updated cluster config ...
	I1126 20:22:43.352549  271308 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:43.356029  271308 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:43.359306  271308 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.363412  271308 pod_ready.go:94] pod "coredns-66bc5c9577-tpmmm" is "Ready"
	I1126 20:22:43.363435  271308 pod_ready.go:86] duration metric: took 4.112055ms for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.365248  271308 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.368843  271308 pod_ready.go:94] pod "etcd-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:43.368862  271308 pod_ready.go:86] duration metric: took 3.598035ms for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.370559  271308 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.373917  271308 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:43.373937  271308 pod_ready.go:86] duration metric: took 3.359149ms for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.375639  271308 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.760756  271308 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:43.760788  271308 pod_ready.go:86] duration metric: took 385.124259ms for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.960061  271308 pod_ready.go:83] waiting for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:44.359897  271308 pod_ready.go:94] pod "kube-proxy-vd7fp" is "Ready"
	I1126 20:22:44.359924  271308 pod_ready.go:86] duration metric: took 399.838276ms for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:44.560435  271308 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.668973  281230 cli_runner.go:164] Run: docker network inspect embed-certs-949294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:43.686898  281230 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1126 20:22:43.690943  281230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:43.701122  281230 kubeadm.go:884] updating cluster {Name:embed-certs-949294 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-949294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:22:43.701233  281230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:43.701286  281230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:43.733576  281230 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:43.733598  281230 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:22:43.733638  281230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:43.757784  281230 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:43.757801  281230 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:22:43.757809  281230 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1126 20:22:43.757903  281230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-949294 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-949294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:22:43.757958  281230 ssh_runner.go:195] Run: crio config
	I1126 20:22:43.801014  281230 cni.go:84] Creating CNI manager for ""
	I1126 20:22:43.801042  281230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:43.801062  281230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:22:43.801091  281230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-949294 NodeName:embed-certs-949294 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:22:43.801281  281230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-949294"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:22:43.801354  281230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:22:43.809139  281230 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:22:43.809185  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:22:43.816443  281230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:22:43.828647  281230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:22:43.842109  281230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1126 20:22:43.853618  281230 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:22:43.856940  281230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:43.866100  281230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:43.974877  281230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:43.999946  281230 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294 for IP: 192.168.94.2
	I1126 20:22:43.999968  281230 certs.go:195] generating shared ca certs ...
	I1126 20:22:43.999990  281230 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.000162  281230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:22:44.000228  281230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:22:44.000242  281230 certs.go:257] generating profile certs ...
	I1126 20:22:44.000348  281230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/client.key
	I1126 20:22:44.000422  281230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/apiserver.key.5bee8ac0
	I1126 20:22:44.000502  281230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/proxy-client.key
	I1126 20:22:44.000653  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:22:44.000697  281230 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:22:44.000711  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:22:44.000754  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:22:44.000799  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:22:44.000834  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:22:44.000897  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:44.001493  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:22:44.019892  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:22:44.040066  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:22:44.057726  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:22:44.081328  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1126 20:22:44.098058  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:22:44.113934  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:22:44.129831  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:22:44.145588  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:22:44.161958  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:22:44.178404  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:22:44.195831  281230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:22:44.207337  281230 ssh_runner.go:195] Run: openssl version
	I1126 20:22:44.213097  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:22:44.220687  281230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:22:44.224116  281230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:22:44.224164  281230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:22:44.258977  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:22:44.267014  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:22:44.275688  281230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:44.279299  281230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:44.279349  281230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:44.314548  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:22:44.322323  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:22:44.331309  281230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:22:44.334747  281230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:22:44.334792  281230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:22:44.369194  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:22:44.377304  281230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:22:44.381220  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:22:44.417889  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:22:44.454503  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:22:44.491150  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:22:44.542762  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:22:44.589987  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:22:44.653209  281230 kubeadm.go:401] StartCluster: {Name:embed-certs-949294 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-949294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:44.653317  281230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:22:44.653402  281230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:22:44.698166  281230 cri.go:89] found id: "d0edb51e9e5fccff0dc7134b628a507303eb3f0ea693960b2ef07c819ccfcfb3"
	I1126 20:22:44.698189  281230 cri.go:89] found id: "1247796ab1281fb5ee5525e146ff9c03a80b5e36662073b88f7b4335b21630fa"
	I1126 20:22:44.698194  281230 cri.go:89] found id: "f265ea81a09610c82177779173f228ed110d405daaad945e9224abda5afc655e"
	I1126 20:22:44.698199  281230 cri.go:89] found id: "27f1868f6da521ee9bf27fc5eccba3b561b07052f526f759dfbebcc382b682e0"
	I1126 20:22:44.698204  281230 cri.go:89] found id: ""
	I1126 20:22:44.698249  281230 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:22:44.712857  281230 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:44Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:22:44.712953  281230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:22:44.721110  281230 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:22:44.721122  281230 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:22:44.721219  281230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:22:44.728115  281230 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:22:44.728769  281230 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-949294" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:44.729067  281230 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-949294" cluster setting kubeconfig missing "embed-certs-949294" context setting]
	I1126 20:22:44.729783  281230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.731342  281230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:22:44.739276  281230 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1126 20:22:44.739300  281230 kubeadm.go:602] duration metric: took 18.174206ms to restartPrimaryControlPlane
	I1126 20:22:44.739307  281230 kubeadm.go:403] duration metric: took 86.108546ms to StartCluster
	I1126 20:22:44.739318  281230 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.739377  281230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:44.740675  281230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.740856  281230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:22:44.741084  281230 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:44.741124  281230 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:22:44.741179  281230 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-949294"
	I1126 20:22:44.741193  281230 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-949294"
	W1126 20:22:44.741198  281230 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:22:44.741214  281230 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:22:44.741554  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.741639  281230 addons.go:70] Setting dashboard=true in profile "embed-certs-949294"
	I1126 20:22:44.741651  281230 addons.go:70] Setting default-storageclass=true in profile "embed-certs-949294"
	I1126 20:22:44.741668  281230 addons.go:239] Setting addon dashboard=true in "embed-certs-949294"
	I1126 20:22:44.741669  281230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-949294"
	W1126 20:22:44.741678  281230 addons.go:248] addon dashboard should already be in state true
	I1126 20:22:44.741729  281230 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:22:44.741928  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.742228  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.742329  281230 out.go:179] * Verifying Kubernetes components...
	I1126 20:22:44.745728  281230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:44.769720  281230 addons.go:239] Setting addon default-storageclass=true in "embed-certs-949294"
	W1126 20:22:44.769745  281230 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:22:44.769776  281230 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:22:44.770229  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.770534  281230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:22:44.771603  281230 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:22:44.771655  281230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:44.771665  281230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:44.771726  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:44.773363  281230 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:22:44.961735  271308 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:44.961781  271308 pod_ready.go:86] duration metric: took 401.291943ms for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:44.961797  271308 pod_ready.go:40] duration metric: took 1.605738411s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:45.024642  271308 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:22:45.028340  271308 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-178152" cluster and "default" namespace by default
	I1126 20:22:41.130916  283132 out.go:252] * Restarting existing docker container for "newest-cni-297942" ...
	I1126 20:22:41.130973  283132 cli_runner.go:164] Run: docker start newest-cni-297942
	I1126 20:22:41.417598  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:41.436343  283132 kic.go:430] container "newest-cni-297942" state is running.
	I1126 20:22:41.436757  283132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-297942
	I1126 20:22:41.454760  283132 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json ...
	I1126 20:22:41.454963  283132 machine.go:94] provisionDockerMachine start ...
	I1126 20:22:41.455014  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:41.473682  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.473897  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:41.473908  283132 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:22:41.474510  283132 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53014->127.0.0.1:33093: read: connection reset by peer
	I1126 20:22:44.628083  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-297942
	
	I1126 20:22:44.628112  283132 ubuntu.go:182] provisioning hostname "newest-cni-297942"
	I1126 20:22:44.628888  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:44.654951  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:44.655280  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:44.655300  283132 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-297942 && echo "newest-cni-297942" | sudo tee /etc/hostname
	I1126 20:22:44.836325  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-297942
	
	I1126 20:22:44.836408  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:44.860919  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:44.861149  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:44.861181  283132 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-297942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-297942/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-297942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:22:45.024750  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:22:45.024885  283132 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:22:45.024931  283132 ubuntu.go:190] setting up certificates
	I1126 20:22:45.025025  283132 provision.go:84] configureAuth start
	I1126 20:22:45.025434  283132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-297942
	I1126 20:22:45.053790  283132 provision.go:143] copyHostCerts
	I1126 20:22:45.054123  283132 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:22:45.054181  283132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:22:45.054621  283132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:22:45.054815  283132 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:22:45.054941  283132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:22:45.056077  283132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:22:45.056254  283132 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:22:45.056282  283132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:22:45.056373  283132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:22:45.056499  283132 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.newest-cni-297942 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-297942]
	I1126 20:22:45.148820  283132 provision.go:177] copyRemoteCerts
	I1126 20:22:45.148880  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:22:45.148938  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.175942  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:45.287084  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:22:45.308935  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:22:45.325992  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:22:45.342613  283132 provision.go:87] duration metric: took 317.575317ms to configureAuth
	I1126 20:22:45.342637  283132 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:22:45.342828  283132 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:45.342955  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.362599  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:45.362913  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:45.362936  283132 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:22:45.681202  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:22:45.681227  283132 machine.go:97] duration metric: took 4.226250286s to provisionDockerMachine
	I1126 20:22:45.681240  283132 start.go:293] postStartSetup for "newest-cni-297942" (driver="docker")
	I1126 20:22:45.681252  283132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:22:45.681306  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:22:45.681356  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.705211  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:45.819521  283132 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:22:45.823878  283132 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:22:45.823902  283132 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:22:45.823911  283132 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:22:45.823957  283132 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:22:45.824019  283132 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:22:45.824103  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:22:45.832396  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:45.855936  283132 start.go:296] duration metric: took 174.682288ms for postStartSetup
	I1126 20:22:45.856010  283132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:22:45.856070  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.877896  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:42.153037  279050 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:22:42.157427  279050 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:22:42.158369  279050 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:42.158392  279050 api_server.go:131] duration metric: took 1.005661792s to wait for apiserver health ...
	I1126 20:22:42.158401  279050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:42.161910  279050 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:42.161934  279050 system_pods.go:61] "coredns-66bc5c9577-wl4xp" [e1cf9739-1b9a-44d7-a932-447ac94e142d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.161942  279050 system_pods.go:61] "etcd-no-preload-026579" [cce16df8-4867-47fc-acec-5b1651799367] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:42.161952  279050 system_pods.go:61] "kindnet-8rfpj" [09d32618-edb6-49f0-b9ce-af0f0751b53f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:42.161968  279050 system_pods.go:61] "kube-apiserver-no-preload-026579" [4119730b-bf6c-4d48-9883-13ca12edeabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:42.161984  279050 system_pods.go:61] "kube-controller-manager-no-preload-026579" [c583b53a-df53-4d77-995f-c6c7d1189418] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:42.161995  279050 system_pods.go:61] "kube-proxy-ktbwp" [93566a91-b6dc-47fa-9d46-9ebf0fc4704a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:42.162008  279050 system_pods.go:61] "kube-scheduler-no-preload-026579" [0ae4f2d1-175e-45c0-bbb2-eedf826e6fb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:42.162015  279050 system_pods.go:61] "storage-provisioner" [e2f8aa92-297b-4be4-a3a2-45a956763aad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.162021  279050 system_pods.go:74] duration metric: took 3.614709ms to wait for pod list to return data ...
	I1126 20:22:42.162029  279050 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:42.164140  279050 default_sa.go:45] found service account: "default"
	I1126 20:22:42.164157  279050 default_sa.go:55] duration metric: took 2.123726ms for default service account to be created ...
	I1126 20:22:42.164165  279050 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:42.166895  279050 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.166923  279050 system_pods.go:89] "coredns-66bc5c9577-wl4xp" [e1cf9739-1b9a-44d7-a932-447ac94e142d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.166933  279050 system_pods.go:89] "etcd-no-preload-026579" [cce16df8-4867-47fc-acec-5b1651799367] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:42.166942  279050 system_pods.go:89] "kindnet-8rfpj" [09d32618-edb6-49f0-b9ce-af0f0751b53f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:42.166955  279050 system_pods.go:89] "kube-apiserver-no-preload-026579" [4119730b-bf6c-4d48-9883-13ca12edeabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:42.166963  279050 system_pods.go:89] "kube-controller-manager-no-preload-026579" [c583b53a-df53-4d77-995f-c6c7d1189418] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:42.166986  279050 system_pods.go:89] "kube-proxy-ktbwp" [93566a91-b6dc-47fa-9d46-9ebf0fc4704a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:42.167013  279050 system_pods.go:89] "kube-scheduler-no-preload-026579" [0ae4f2d1-175e-45c0-bbb2-eedf826e6fb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:42.167025  279050 system_pods.go:89] "storage-provisioner" [e2f8aa92-297b-4be4-a3a2-45a956763aad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.167036  279050 system_pods.go:126] duration metric: took 2.86619ms to wait for k8s-apps to be running ...
	I1126 20:22:42.167048  279050 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:42.167096  279050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:42.179063  279050 system_svc.go:56] duration metric: took 12.010286ms WaitForService to wait for kubelet
	I1126 20:22:42.179086  279050 kubeadm.go:587] duration metric: took 3.056293076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:42.179104  279050 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:42.181486  279050 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:42.181505  279050 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:42.181517  279050 node_conditions.go:105] duration metric: took 2.408547ms to run NodePressure ...
	I1126 20:22:42.181527  279050 start.go:242] waiting for startup goroutines ...
	I1126 20:22:42.181536  279050 start.go:247] waiting for cluster config update ...
	I1126 20:22:42.181545  279050 start.go:256] writing updated cluster config ...
	I1126 20:22:42.181758  279050 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:42.185430  279050 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:42.188391  279050 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wl4xp" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:22:44.193372  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:22:46.193941  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	I1126 20:22:44.775191  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:22:44.775236  281230 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:22:44.775284  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:44.802026  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:44.804445  281230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:44.804510  281230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:44.804668  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:44.809300  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:44.836635  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:44.906402  281230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:44.926683  281230 node_ready.go:35] waiting up to 6m0s for node "embed-certs-949294" to be "Ready" ...
	I1126 20:22:44.942037  281230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:44.943189  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:22:44.943289  281230 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:22:44.958004  281230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:44.964275  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:22:44.964293  281230 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:22:44.988499  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:22:44.988525  281230 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:22:45.008309  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:22:45.008331  281230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:22:45.030026  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:22:45.030061  281230 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:22:45.054222  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:22:45.054247  281230 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:22:45.075321  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:22:45.075344  281230 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:22:45.092705  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:22:45.092729  281230 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:22:45.109718  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:45.109739  281230 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:22:45.123556  281230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:46.834584  281230 node_ready.go:49] node "embed-certs-949294" is "Ready"
	I1126 20:22:46.834631  281230 node_ready.go:38] duration metric: took 1.907908732s for node "embed-certs-949294" to be "Ready" ...
	I1126 20:22:46.834647  281230 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:46.834802  281230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:47.646270  281230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.704087312s)
	I1126 20:22:47.646325  281230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.688236386s)
	I1126 20:22:47.646452  281230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.522860781s)
	I1126 20:22:47.646922  281230 api_server.go:72] duration metric: took 2.906037516s to wait for apiserver process to appear ...
	I1126 20:22:47.646942  281230 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:47.646959  281230 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1126 20:22:47.650745  281230 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-949294 addons enable metrics-server
	
	I1126 20:22:45.981988  283132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:22:45.987899  283132 fix.go:56] duration metric: took 4.876561031s for fixHost
	I1126 20:22:45.987927  283132 start.go:83] releasing machines lock for "newest-cni-297942", held for 4.876610638s
	I1126 20:22:45.987992  283132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-297942
	I1126 20:22:46.011274  283132 ssh_runner.go:195] Run: cat /version.json
	I1126 20:22:46.011335  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:46.011553  283132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:22:46.011634  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:46.035874  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:46.038422  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:46.145928  283132 ssh_runner.go:195] Run: systemctl --version
	I1126 20:22:46.208754  283132 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:22:46.260685  283132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:22:46.266786  283132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:22:46.266850  283132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:22:46.279170  283132 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:22:46.279196  283132 start.go:496] detecting cgroup driver to use...
	I1126 20:22:46.279228  283132 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:22:46.279279  283132 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:22:46.296769  283132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:22:46.312842  283132 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:22:46.313623  283132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:22:46.336404  283132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:22:46.362833  283132 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:22:46.485694  283132 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:22:46.608625  283132 docker.go:234] disabling docker service ...
	I1126 20:22:46.608710  283132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:22:46.627969  283132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:22:46.647325  283132 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:22:46.777835  283132 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:22:46.941504  283132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:22:46.960693  283132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
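	The `tee /etc/crictl.yaml` step above pins crictl to the cri-o socket. A minimal sketch of the file it produces, written to a temp directory here so the sketch needs no root (the /etc/crictl.yaml path is what the log actually uses):

```shell
# Reproduce the crictl.yaml minikube writes (temp dir substituted for /etc).
dir=$(mktemp -d)
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > "$dir/crictl.yaml"
# crictl would pick this up via: crictl --config "$dir/crictl.yaml" version
cat "$dir/crictl.yaml"
```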
	I1126 20:22:46.980499  283132 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:22:46.980558  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:46.994995  283132 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:22:46.995161  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.007396  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.019337  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.031265  283132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:22:47.041699  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.052215  283132 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.063748  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
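	The sed sequence above rewrites 02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, and replace the conmon cgroup. A sketch applying the same edits from the log to a throwaway copy (the starting values below are invented placeholders; only the sed expressions come from the log):

```shell
# Apply the log's sed edits to a temp copy of 02-crio.conf (no sudo needed).
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
EOF
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"                          # drop the old value
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"   # re-add below cgroup_manager
cat "$conf"
```

	Note the delete-then-append dance for `conmon_cgroup`: sed has no clean "upsert", so the old line is removed and a fresh one is appended after `cgroup_manager`.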
	I1126 20:22:47.075564  283132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:22:47.087066  283132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:22:47.098156  283132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:47.230987  283132 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:22:47.533145  283132 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:22:47.533212  283132 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:22:47.539562  283132 start.go:564] Will wait 60s for crictl version
	I1126 20:22:47.539619  283132 ssh_runner.go:195] Run: which crictl
	I1126 20:22:47.545726  283132 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:22:47.577381  283132 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:22:47.577482  283132 ssh_runner.go:195] Run: crio --version
	I1126 20:22:47.614544  283132 ssh_runner.go:195] Run: crio --version
	I1126 20:22:47.654164  283132 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:22:47.652263  281230 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:47.652284  281230 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
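	The 500 above is expected mid-startup: every check is `[+]` except two post-start hooks still marked `[-]` (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes). A small sketch that pulls the failing check names out of a saved healthz body (the sample lines are taken from the response above):

```shell
# Extract the names of failing checks from a saved /healthz response.
hz=$(mktemp)
cat > "$hz" <<'EOF'
[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed
EOF
grep '^\[-\]' "$hz" | cut -d']' -f2- | cut -d' ' -f1
```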
	I1126 20:22:47.661749  281230 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1126 20:22:47.655252  283132 cli_runner.go:164] Run: docker network inspect newest-cni-297942 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:47.676378  283132 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1126 20:22:47.681380  283132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:47.696551  283132 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1126 20:22:47.697725  283132 kubeadm.go:884] updating cluster {Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:22:47.697864  283132 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:47.697953  283132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:47.737614  283132 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:47.737644  283132 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:22:47.737710  283132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:47.769807  283132 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:47.769838  283132 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:22:47.769848  283132 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1126 20:22:47.769987  283132 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-297942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:22:47.770072  283132 ssh_runner.go:195] Run: crio config
	I1126 20:22:47.833805  283132 cni.go:84] Creating CNI manager for ""
	I1126 20:22:47.833849  283132 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:47.833867  283132 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1126 20:22:47.833903  283132 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-297942 NodeName:newest-cni-297942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:22:47.834082  283132 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-297942"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
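	The generated kubeadm.yaml above is a single YAML stream of four documents separated by `---`. A sketch that lists the document kinds, using a stub stream with the same kinds as the config above:

```shell
# The kubeadm config is one stream, four documents; list their kinds.
yaml=$(mktemp)
cat > "$yaml" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep '^kind:' "$yaml"
```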
	I1126 20:22:47.834169  283132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:22:47.843484  283132 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:22:47.843547  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:22:47.853856  283132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:22:47.868846  283132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:22:47.885385  283132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1126 20:22:47.903633  283132 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:22:47.908802  283132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
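	The /etc/hosts command above is an idempotent upsert: filter out any existing entry for the name, append the current one, then copy the result back. The same pattern against a temp file (temp path substituted for /etc/hosts so no sudo is needed):

```shell
# Idempotent hosts-entry update, as in the log: strip old entry, append new.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.103.2\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"
  printf '192.168.103.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'control-plane.minikube.internal' "$hosts"   # prints 1 even after repeated runs
```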
	I1126 20:22:47.922224  283132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:48.037628  283132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:48.069247  283132 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942 for IP: 192.168.103.2
	I1126 20:22:48.069272  283132 certs.go:195] generating shared ca certs ...
	I1126 20:22:48.069292  283132 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:48.069497  283132 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:22:48.069570  283132 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:22:48.069587  283132 certs.go:257] generating profile certs ...
	I1126 20:22:48.069711  283132 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/client.key
	I1126 20:22:48.069784  283132 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/apiserver.key.9b9f8b84
	I1126 20:22:48.069880  283132 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/proxy-client.key
	I1126 20:22:48.070067  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:22:48.070122  283132 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:22:48.070133  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:22:48.070169  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:22:48.070199  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:22:48.070235  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:22:48.070293  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:48.071194  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:22:48.097890  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:22:48.121561  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:22:48.146613  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:22:48.176193  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1126 20:22:48.202051  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:22:48.225070  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:22:48.246760  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:22:48.269084  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:22:48.292062  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:22:48.313735  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:22:48.335657  283132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:22:48.351074  283132 ssh_runner.go:195] Run: openssl version
	I1126 20:22:48.358937  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:22:48.369856  283132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:22:48.375367  283132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:22:48.375419  283132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:22:48.428766  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:22:48.439674  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:22:48.450900  283132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:48.455705  283132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:48.455757  283132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:48.509707  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:22:48.520864  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:22:48.532096  283132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:22:48.536714  283132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:22:48.536763  283132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:22:48.592642  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:22:48.602562  283132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:22:48.607725  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:22:48.668271  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:22:48.723058  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:22:48.766993  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:22:48.809051  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:22:48.869800  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:22:48.933325  283132 kubeadm.go:401] StartCluster: {Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:48.933433  283132 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:22:48.933507  283132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:22:48.969182  283132 cri.go:89] found id: ""
	I1126 20:22:48.969273  283132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:22:48.980080  283132 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:22:48.980099  283132 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:22:48.980145  283132 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:22:48.990153  283132 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:22:48.991382  283132 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-297942" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:48.992253  283132 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-297942" cluster setting kubeconfig missing "newest-cni-297942" context setting]
	I1126 20:22:48.993562  283132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:48.995871  283132 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:22:49.006243  283132 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1126 20:22:49.006272  283132 kubeadm.go:602] duration metric: took 26.166791ms to restartPrimaryControlPlane
	I1126 20:22:49.006282  283132 kubeadm.go:403] duration metric: took 72.966028ms to StartCluster
	I1126 20:22:49.006297  283132 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:49.006353  283132 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:49.008962  283132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:49.010081  283132 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:22:49.010330  283132 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:49.010385  283132 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:22:49.010493  283132 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-297942"
	I1126 20:22:49.010512  283132 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-297942"
	W1126 20:22:49.010523  283132 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:22:49.010550  283132 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:49.010793  283132 addons.go:70] Setting dashboard=true in profile "newest-cni-297942"
	I1126 20:22:49.010822  283132 addons.go:70] Setting default-storageclass=true in profile "newest-cni-297942"
	I1126 20:22:49.010829  283132 addons.go:239] Setting addon dashboard=true in "newest-cni-297942"
	W1126 20:22:49.010840  283132 addons.go:248] addon dashboard should already be in state true
	I1126 20:22:49.010844  283132 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-297942"
	I1126 20:22:49.010864  283132 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:49.011039  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.011163  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.011281  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.039942  283132 addons.go:239] Setting addon default-storageclass=true in "newest-cni-297942"
	W1126 20:22:49.039969  283132 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:22:49.039995  283132 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:49.040473  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.062659  283132 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:49.062681  283132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:49.062734  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:49.071753  283132 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:22:49.071754  283132 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:22:49.071760  283132 out.go:179] * Verifying Kubernetes components...
	I1126 20:22:49.083205  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:49.093615  283132 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:49.093646  283132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:49.093716  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:49.094772  283132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:49.095752  283132 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:22:49.098197  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:22:49.098216  283132 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:22:49.098302  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:49.120042  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:49.124517  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:49.223673  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:49.233917  283132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:49.244980  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:49.257038  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:22:49.257061  283132 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:22:49.295636  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:22:49.295664  283132 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W1126 20:22:49.312492  283132 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.312533  283132 retry.go:31] will retry after 141.575876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.312612  283132 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:49.312669  283132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:49.321556  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:22:49.321592  283132 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1126 20:22:49.344947  283132 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.344982  283132 retry.go:31] will retry after 218.049714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.345028  283132 api_server.go:72] duration metric: took 334.915012ms to wait for apiserver process to appear ...
	I1126 20:22:49.345038  283132 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:49.345054  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:49.345834  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:22:49.345938  283132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:22:49.346111  283132 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:22:49.369397  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:22:49.369420  283132 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:22:49.390504  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:22:49.390683  283132 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:22:49.408410  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:22:49.408441  283132 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:22:49.426482  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:22:49.426503  283132 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:22:49.442793  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:49.442870  283132 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:22:49.454437  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:49.461179  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:49.563685  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:49.845496  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1126 20:22:48.197694  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:22:50.201188  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	I1126 20:22:51.277974  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 20:22:51.278018  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 20:22:51.278039  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:51.287748  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 20:22:51.287777  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 20:22:51.345992  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:51.353164  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:51.353197  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:51.403236  283132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.948765551s)
	I1126 20:22:51.845876  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:51.854352  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:51.854381  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:51.937991  283132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.476761053s)
	I1126 20:22:51.940235  283132 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-297942 addons enable metrics-server
	
	I1126 20:22:52.048989  283132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.485263917s)
	I1126 20:22:52.050773  283132 out.go:179] * Enabled addons: default-storageclass, dashboard, storage-provisioner
	I1126 20:22:47.665529  281230 addons.go:530] duration metric: took 2.924403622s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1126 20:22:48.147073  281230 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1126 20:22:48.153314  281230 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1126 20:22:48.154522  281230 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:48.154551  281230 api_server.go:131] duration metric: took 507.601137ms to wait for apiserver health ...
	I1126 20:22:48.154562  281230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:48.159761  281230 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:48.159808  281230 system_pods.go:61] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:48.159819  281230 system_pods.go:61] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:48.159827  281230 system_pods.go:61] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:48.159836  281230 system_pods.go:61] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:48.159858  281230 system_pods.go:61] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:48.159867  281230 system_pods.go:61] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:48.159875  281230 system_pods.go:61] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:48.159880  281230 system_pods.go:61] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Running
	I1126 20:22:48.159888  281230 system_pods.go:74] duration metric: took 5.318838ms to wait for pod list to return data ...
	I1126 20:22:48.159896  281230 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:48.163237  281230 default_sa.go:45] found service account: "default"
	I1126 20:22:48.163425  281230 default_sa.go:55] duration metric: took 3.520246ms for default service account to be created ...
	I1126 20:22:48.163453  281230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:48.167512  281230 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:48.168002  281230 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:48.168069  281230 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:48.168093  281230 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:48.168114  281230 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:48.168149  281230 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:48.168176  281230 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:48.168197  281230 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:48.168213  281230 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Running
	I1126 20:22:48.168233  281230 system_pods.go:126] duration metric: took 4.719858ms to wait for k8s-apps to be running ...
	I1126 20:22:48.168284  281230 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:48.168353  281230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:48.189258  281230 system_svc.go:56] duration metric: took 20.967364ms WaitForService to wait for kubelet
	I1126 20:22:48.189288  281230 kubeadm.go:587] duration metric: took 3.448403882s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:48.189311  281230 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:48.194077  281230 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:48.194116  281230 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:48.194135  281230 node_conditions.go:105] duration metric: took 4.818329ms to run NodePressure ...
	I1126 20:22:48.194150  281230 start.go:242] waiting for startup goroutines ...
	I1126 20:22:48.194164  281230 start.go:247] waiting for cluster config update ...
	I1126 20:22:48.194178  281230 start.go:256] writing updated cluster config ...
	I1126 20:22:48.194454  281230 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:48.199326  281230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:48.204363  281230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:22:50.231611  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:22:52.051919  283132 addons.go:530] duration metric: took 3.041532347s for enable addons: enabled=[default-storageclass dashboard storage-provisioner]
	I1126 20:22:52.345587  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:52.350543  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:52.350570  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:52.846025  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:52.851313  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1126 20:22:52.852557  283132 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:52.852582  283132 api_server.go:131] duration metric: took 3.507536375s to wait for apiserver health ...
	I1126 20:22:52.852593  283132 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:52.856745  283132 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:52.856818  283132 system_pods.go:61] "coredns-66bc5c9577-bnszr" [ddf077eb-a9c4-42f2-a9b7-0aced551aa38] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:22:52.856864  283132 system_pods.go:61] "etcd-newest-cni-297942" [6520dcdd-9b71-4c83-8e54-7421dd7034af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:52.856881  283132 system_pods.go:61] "kindnet-wlhp7" [a6a459a7-87d9-4628-ad09-7e6e8d8445da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:52.856908  283132 system_pods.go:61] "kube-apiserver-newest-cni-297942" [7c910df8-6020-46fb-a380-09a0698b3720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:52.856922  283132 system_pods.go:61] "kube-controller-manager-newest-cni-297942" [66f96670-85f0-47d1-859b-4844b80909d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:52.856931  283132 system_pods.go:61] "kube-proxy-lx6vw" [6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:52.856939  283132 system_pods.go:61] "kube-scheduler-newest-cni-297942" [4d59e692-80ac-4baa-9316-d8930f423531] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:52.856947  283132 system_pods.go:61] "storage-provisioner" [815d8b30-f9a4-4565-9f15-f45940446bd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:22:52.856955  283132 system_pods.go:74] duration metric: took 4.355286ms to wait for pod list to return data ...
	I1126 20:22:52.856965  283132 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:52.859730  283132 default_sa.go:45] found service account: "default"
	I1126 20:22:52.859762  283132 default_sa.go:55] duration metric: took 2.779407ms for default service account to be created ...
	I1126 20:22:52.859775  283132 kubeadm.go:587] duration metric: took 3.849662669s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:22:52.859793  283132 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:52.862559  283132 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:52.862585  283132 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:52.862603  283132 node_conditions.go:105] duration metric: took 2.80479ms to run NodePressure ...
	I1126 20:22:52.862617  283132 start.go:242] waiting for startup goroutines ...
	I1126 20:22:52.862626  283132 start.go:247] waiting for cluster config update ...
	I1126 20:22:52.862639  283132 start.go:256] writing updated cluster config ...
	I1126 20:22:52.863068  283132 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:52.938360  283132 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:22:52.940104  283132 out.go:179] * Done! kubectl is now configured to use "newest-cni-297942" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:22:42 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:42.606223817Z" level=info msg="Starting container: d886b536d688258b818c3896cbbdffb9e9ea64dbcf61f25bbe964be4cd6502c3" id=b3a97dc1-bca0-49eb-a93d-1d4ee946c482 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:42 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:42.60869421Z" level=info msg="Started container" PID=1844 containerID=d886b536d688258b818c3896cbbdffb9e9ea64dbcf61f25bbe964be4cd6502c3 description=kube-system/coredns-66bc5c9577-tpmmm/coredns id=b3a97dc1-bca0-49eb-a93d-1d4ee946c482 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e811e49180ef6f0ab9709fa87d44f4575385b51db06fdede3253e1504c287ce8
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.588586536Z" level=info msg="Running pod sandbox: default/busybox/POD" id=464a97b4-8832-42b5-9ad8-cc90705de182 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.589081933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.594704262Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1dcbfb7471183d9b9f507286d8fd31d31ae681379747f773be793850cfcd2f80 UID:784f93fd-b5f3-4353-977c-1c2395ef08b7 NetNS:/var/run/netns/a19fad8c-f92e-47f7-ac05-21ff836247dc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad70}] Aliases:map[]}"
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.59482776Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.609283407Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1dcbfb7471183d9b9f507286d8fd31d31ae681379747f773be793850cfcd2f80 UID:784f93fd-b5f3-4353-977c-1c2395ef08b7 NetNS:/var/run/netns/a19fad8c-f92e-47f7-ac05-21ff836247dc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad70}] Aliases:map[]}"
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.609503678Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.610630083Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.611980533Z" level=info msg="Ran pod sandbox 1dcbfb7471183d9b9f507286d8fd31d31ae681379747f773be793850cfcd2f80 with infra container: default/busybox/POD" id=464a97b4-8832-42b5-9ad8-cc90705de182 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.613260085Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=84fc2783-63d6-45bc-842d-a3748d2cce30 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.613394725Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=84fc2783-63d6-45bc-842d-a3748d2cce30 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.613441505Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=84fc2783-63d6-45bc-842d-a3748d2cce30 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.614903619Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99350d6c-23dd-43f0-8b39-ed394585e6bf name=/runtime.v1.ImageService/PullImage
	Nov 26 20:22:45 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:45.62213102Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.430300212Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=99350d6c-23dd-43f0-8b39-ed394585e6bf name=/runtime.v1.ImageService/PullImage
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.431199888Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a803a905-f620-48f2-8943-5d703380d938 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.432851099Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e1e626e5-3244-4523-99f1-b62714114776 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.436353143Z" level=info msg="Creating container: default/busybox/busybox" id=01b3129d-8426-4de5-b20b-c26f9f73b5b1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.436620743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.441441314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.441906016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.479116379Z" level=info msg="Created container 9fb3633b8dc51401b77794213cd847a6a72013510b8d26cac82d90b055bb6126: default/busybox/busybox" id=01b3129d-8426-4de5-b20b-c26f9f73b5b1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.479775777Z" level=info msg="Starting container: 9fb3633b8dc51401b77794213cd847a6a72013510b8d26cac82d90b055bb6126" id=2aadb63d-be5c-4d0d-99c6-c924f2394c31 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:46 default-k8s-diff-port-178152 crio[781]: time="2025-11-26T20:22:46.48190404Z" level=info msg="Started container" PID=1921 containerID=9fb3633b8dc51401b77794213cd847a6a72013510b8d26cac82d90b055bb6126 description=default/busybox/busybox id=2aadb63d-be5c-4d0d-99c6-c924f2394c31 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1dcbfb7471183d9b9f507286d8fd31d31ae681379747f773be793850cfcd2f80
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	9fb3633b8dc51       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   1dcbfb7471183       busybox                                                default
	d886b536d6882       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   e811e49180ef6       coredns-66bc5c9577-tpmmm                               kube-system
	13ebc2a72a6f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   ae89af712a4a0       storage-provisioner                                    kube-system
	03484ce3d3106       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   7efc11d50867e       kindnet-bmzz2                                          kube-system
	7118c7f588e36       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   15debbcc988a7       kube-proxy-vd7fp                                       kube-system
	504837bd07e3b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   6e16ec5143dc0       kube-scheduler-default-k8s-diff-port-178152            kube-system
	d4adf2024a3cd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   2dc8111f156fc       kube-controller-manager-default-k8s-diff-port-178152   kube-system
	dfbfd3298154a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   8cb102cedcc96       kube-apiserver-default-k8s-diff-port-178152            kube-system
	78c7f96f2de7b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   2d3222fb14b03       etcd-default-k8s-diff-port-178152                      kube-system
	
	
	==> coredns [d886b536d688258b818c3896cbbdffb9e9ea64dbcf61f25bbe964be4cd6502c3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45618 - 33763 "HINFO IN 2801054155613499879.3676868205163325008. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.479277024s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-178152
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-178152
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=default-k8s-diff-port-178152
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_22_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:22:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-178152
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:22:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:22:42 +0000   Wed, 26 Nov 2025 20:22:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:22:42 +0000   Wed, 26 Nov 2025 20:22:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:22:42 +0000   Wed, 26 Nov 2025 20:22:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:22:42 +0000   Wed, 26 Nov 2025 20:22:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-178152
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                d91795ef-51fb-4835-abf4-4b138b22a490
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-tpmmm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-178152                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-bmzz2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-178152             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-178152    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-vd7fp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-178152             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-178152 event: Registered Node default-k8s-diff-port-178152 in Controller
	  Normal  NodeReady                12s   kubelet          Node default-k8s-diff-port-178152 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [78c7f96f2de7bf7404ad0f8658dcf319ada95ae9cf1037761ed53006cb6d8795] <==
	{"level":"warn","ts":"2025-11-26T20:22:22.208417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.218185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.224205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.230213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.236182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.242210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.249937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.256114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.262219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.269300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.286873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.293991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.301529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.308864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.314971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.322058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.328299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.337835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.345114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.351583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.358395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.375320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.381879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.388684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:22.429962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60550","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:22:54 up  1:05,  0 user,  load average: 4.79, 3.36, 2.14
	Linux default-k8s-diff-port-178152 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03484ce3d3106afae33be14c3b6ac1518a0038227f0ccafe718452fd0288516d] <==
	I1126 20:22:31.506226       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:22:31.600346       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:22:31.600524       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:22:31.600544       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:22:31.600572       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:22:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:22:31.801398       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:22:31.802136       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:22:31.802171       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:22:31.802310       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:22:32.202555       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:22:32.202579       1 metrics.go:72] Registering metrics
	I1126 20:22:32.202677       1 controller.go:711] "Syncing nftables rules"
	I1126 20:22:41.803539       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:22:41.803673       1 main.go:301] handling current node
	I1126 20:22:51.803037       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:22:51.803086       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dfbfd3298154a84b54f94ea27ebdaa8c7bb40ff502326db082e69643061999d9] <==
	I1126 20:22:22.911724       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:22:22.914280       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:22:22.914284       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:22:22.915628       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1126 20:22:22.919375       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:22:22.919592       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:22:23.095857       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:22:23.815364       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:22:23.818955       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:22:23.818970       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:22:24.250493       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:22:24.286378       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:22:24.418227       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:22:24.423762       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1126 20:22:24.424626       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:22:24.428125       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:22:24.849613       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:22:25.368989       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:22:25.381842       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:22:25.393184       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:22:30.502155       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:22:30.505366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:22:30.602266       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:22:30.910793       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1126 20:22:52.397538       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:42596: use of closed network connection
	
	
	==> kube-controller-manager [d4adf2024a3cdcd3541e1b20b9dae71e6822c8d28e4d13edeac7581a960711b0] <==
	I1126 20:22:29.849826       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:22:29.849916       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:22:29.849935       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:22:29.849976       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:22:29.849986       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:22:29.849994       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:22:29.850006       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:22:29.850016       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:22:29.851766       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:22:29.851905       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:22:29.852163       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:22:29.852244       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:22:29.852269       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:22:29.852293       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:22:29.852679       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:22:29.853088       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:22:29.853133       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:22:29.853651       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:29.855898       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:29.858744       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:22:29.866431       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-178152" podCIDRs=["10.244.0.0/24"]
	I1126 20:22:29.866638       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 20:22:29.868847       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:22:29.888256       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:44.800486       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7118c7f588e36cf0beeba7a099e42f783a31b86abfebfb13004f8f5119802f69] <==
	I1126 20:22:31.388884       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:22:31.450433       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:22:31.550681       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:22:31.550734       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:22:31.550834       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:22:31.572201       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:22:31.572255       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:22:31.578009       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:22:31.578440       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:22:31.578530       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:31.580162       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:22:31.581053       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:22:31.580432       1 config.go:200] "Starting service config controller"
	I1126 20:22:31.581271       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:22:31.580455       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:22:31.580213       1 config.go:309] "Starting node config controller"
	I1126 20:22:31.581409       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:22:31.581428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:22:31.584043       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:22:31.681430       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:22:31.681430       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:22:31.684198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [504837bd07e3b512ef4123a0c4a9fc38149919fd7bfa9fa93f44dc1492cb1563] <==
	E1126 20:22:22.855252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:22:22.855267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:22:22.855326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:22:22.855372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:22:22.855440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:22:22.855446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:22:22.855508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:22:22.855577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:22:22.855612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:22:22.855610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:22:22.855643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:22:22.855665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:22:23.698762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:22:23.703859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:22:23.792632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:22:23.813616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:22:23.848679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:22:23.865082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:22:23.904991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:22:23.944013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:22:23.990361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:22:23.995484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:22:24.045609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:22:24.082442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1126 20:22:24.451621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:22:26 default-k8s-diff-port-178152 kubelet[1319]: E1126 20:22:26.209854    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-178152\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-178152"
	Nov 26 20:22:26 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:26.239530    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-178152" podStartSLOduration=1.239506258 podStartE2EDuration="1.239506258s" podCreationTimestamp="2025-11-26 20:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:26.226423358 +0000 UTC m=+1.118384282" watchObservedRunningTime="2025-11-26 20:22:26.239506258 +0000 UTC m=+1.131467181"
	Nov 26 20:22:26 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:26.250237    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-178152" podStartSLOduration=1.250218001 podStartE2EDuration="1.250218001s" podCreationTimestamp="2025-11-26 20:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:26.240327295 +0000 UTC m=+1.132288217" watchObservedRunningTime="2025-11-26 20:22:26.250218001 +0000 UTC m=+1.142178930"
	Nov 26 20:22:26 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:26.264843    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-178152" podStartSLOduration=1.264824344 podStartE2EDuration="1.264824344s" podCreationTimestamp="2025-11-26 20:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:26.250712303 +0000 UTC m=+1.142673223" watchObservedRunningTime="2025-11-26 20:22:26.264824344 +0000 UTC m=+1.156785271"
	Nov 26 20:22:26 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:26.275567    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-178152" podStartSLOduration=1.275549754 podStartE2EDuration="1.275549754s" podCreationTimestamp="2025-11-26 20:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:26.264820999 +0000 UTC m=+1.156781924" watchObservedRunningTime="2025-11-26 20:22:26.275549754 +0000 UTC m=+1.167510682"
	Nov 26 20:22:29 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:29.909953    1319 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 26 20:22:29 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:29.910659    1319 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 26 20:22:31 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:31.011335    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37371ddf-6fde-4f46-a877-97f4112ff1b2-lib-modules\") pod \"kube-proxy-vd7fp\" (UID: \"37371ddf-6fde-4f46-a877-97f4112ff1b2\") " pod="kube-system/kube-proxy-vd7fp"
	Nov 26 20:22:31 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:31.011410    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ad4ae092-70cd-48b0-9099-854ccce3329d-cni-cfg\") pod \"kindnet-bmzz2\" (UID: \"ad4ae092-70cd-48b0-9099-854ccce3329d\") " pod="kube-system/kindnet-bmzz2"
	Nov 26 20:22:31 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:31.011437    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad4ae092-70cd-48b0-9099-854ccce3329d-xtables-lock\") pod \"kindnet-bmzz2\" (UID: \"ad4ae092-70cd-48b0-9099-854ccce3329d\") " pod="kube-system/kindnet-bmzz2"
	Nov 26 20:22:31 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:31.011475    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk84z\" (UniqueName: \"kubernetes.io/projected/ad4ae092-70cd-48b0-9099-854ccce3329d-kube-api-access-vk84z\") pod \"kindnet-bmzz2\" (UID: \"ad4ae092-70cd-48b0-9099-854ccce3329d\") " pod="kube-system/kindnet-bmzz2"
	Nov 26 20:22:31 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:31.011560    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94pz\" (UniqueName: \"kubernetes.io/projected/37371ddf-6fde-4f46-a877-97f4112ff1b2-kube-api-access-v94pz\") pod \"kube-proxy-vd7fp\" (UID: \"37371ddf-6fde-4f46-a877-97f4112ff1b2\") " pod="kube-system/kube-proxy-vd7fp"
	Nov 26 20:22:31 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:31.011608    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37371ddf-6fde-4f46-a877-97f4112ff1b2-xtables-lock\") pod \"kube-proxy-vd7fp\" (UID: \"37371ddf-6fde-4f46-a877-97f4112ff1b2\") " pod="kube-system/kube-proxy-vd7fp"
	Nov 26 20:22:31 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:31.011638    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad4ae092-70cd-48b0-9099-854ccce3329d-lib-modules\") pod \"kindnet-bmzz2\" (UID: \"ad4ae092-70cd-48b0-9099-854ccce3329d\") " pod="kube-system/kindnet-bmzz2"
	Nov 26 20:22:31 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:31.011663    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/37371ddf-6fde-4f46-a877-97f4112ff1b2-kube-proxy\") pod \"kube-proxy-vd7fp\" (UID: \"37371ddf-6fde-4f46-a877-97f4112ff1b2\") " pod="kube-system/kube-proxy-vd7fp"
	Nov 26 20:22:32 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:32.237644    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bmzz2" podStartSLOduration=2.237592767 podStartE2EDuration="2.237592767s" podCreationTimestamp="2025-11-26 20:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:32.226283721 +0000 UTC m=+7.118244648" watchObservedRunningTime="2025-11-26 20:22:32.237592767 +0000 UTC m=+7.129553696"
	Nov 26 20:22:32 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:32.954452    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vd7fp" podStartSLOduration=2.954429352 podStartE2EDuration="2.954429352s" podCreationTimestamp="2025-11-26 20:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:32.238280247 +0000 UTC m=+7.130241169" watchObservedRunningTime="2025-11-26 20:22:32.954429352 +0000 UTC m=+7.846390279"
	Nov 26 20:22:42 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:42.207222    1319 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 26 20:22:42 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:42.297178    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbdtz\" (UniqueName: \"kubernetes.io/projected/20166f90-76ba-4092-aab9-29683f4fc146-kube-api-access-bbdtz\") pod \"coredns-66bc5c9577-tpmmm\" (UID: \"20166f90-76ba-4092-aab9-29683f4fc146\") " pod="kube-system/coredns-66bc5c9577-tpmmm"
	Nov 26 20:22:42 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:42.297241    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0ed42547-a316-4970-b4e7-f2157c68ac06-tmp\") pod \"storage-provisioner\" (UID: \"0ed42547-a316-4970-b4e7-f2157c68ac06\") " pod="kube-system/storage-provisioner"
	Nov 26 20:22:42 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:42.297275    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20166f90-76ba-4092-aab9-29683f4fc146-config-volume\") pod \"coredns-66bc5c9577-tpmmm\" (UID: \"20166f90-76ba-4092-aab9-29683f4fc146\") " pod="kube-system/coredns-66bc5c9577-tpmmm"
	Nov 26 20:22:42 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:42.297360    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl6w6\" (UniqueName: \"kubernetes.io/projected/0ed42547-a316-4970-b4e7-f2157c68ac06-kube-api-access-rl6w6\") pod \"storage-provisioner\" (UID: \"0ed42547-a316-4970-b4e7-f2157c68ac06\") " pod="kube-system/storage-provisioner"
	Nov 26 20:22:43 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:43.254559    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tpmmm" podStartSLOduration=12.254536535 podStartE2EDuration="12.254536535s" podCreationTimestamp="2025-11-26 20:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:43.254498792 +0000 UTC m=+18.146459719" watchObservedRunningTime="2025-11-26 20:22:43.254536535 +0000 UTC m=+18.146497462"
	Nov 26 20:22:43 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:43.264688    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.264665373 podStartE2EDuration="12.264665373s" podCreationTimestamp="2025-11-26 20:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:22:43.264605179 +0000 UTC m=+18.156566124" watchObservedRunningTime="2025-11-26 20:22:43.264665373 +0000 UTC m=+18.156626300"
	Nov 26 20:22:45 default-k8s-diff-port-178152 kubelet[1319]: I1126 20:22:45.322284    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwzxt\" (UniqueName: \"kubernetes.io/projected/784f93fd-b5f3-4353-977c-1c2395ef08b7-kube-api-access-fwzxt\") pod \"busybox\" (UID: \"784f93fd-b5f3-4353-977c-1c2395ef08b7\") " pod="default/busybox"
	
	
	==> storage-provisioner [13ebc2a72a6f089083d5905b8793cd9974f8e46307aae4fda792f244d4e7de43] <==
	I1126 20:22:42.607203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:22:42.617822       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:22:42.617944       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:22:42.619977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:42.625348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:22:42.625629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:22:42.625744       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bea3325-c523-4ea4-89b9-0b2d778812eb", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-178152_1b9483bb-3179-4115-a807-2f6e305c6db5 became leader
	I1126 20:22:42.625805       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-178152_1b9483bb-3179-4115-a807-2f6e305c6db5!
	W1126 20:22:42.627662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:42.630657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:22:42.726890       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-178152_1b9483bb-3179-4115-a807-2f6e305c6db5!
	W1126 20:22:44.635556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:44.643622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:46.649558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:46.658033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:48.662072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:48.738265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:50.742129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:50.746628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:52.750715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:22:52.755118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-178152 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.45s)

TestStartStop/group/newest-cni/serial/Pause (6.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-297942 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-297942 --alsologtostderr -v=1: exit status 80 (2.240672907s)

-- stdout --
	* Pausing node newest-cni-297942 ... 
	
	

-- /stdout --
** stderr ** 
	I1126 20:22:53.724707  287749 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:22:53.725026  287749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:53.725038  287749 out.go:374] Setting ErrFile to fd 2...
	I1126 20:22:53.725044  287749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:53.725378  287749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:22:53.725690  287749 out.go:368] Setting JSON to false
	I1126 20:22:53.725711  287749 mustload.go:66] Loading cluster: newest-cni-297942
	I1126 20:22:53.726194  287749 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:53.726792  287749 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:53.747596  287749 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:53.747907  287749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:53.817511  287749 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-26 20:22:53.807194324 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:53.818188  287749 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-297942 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:22:53.820282  287749 out.go:179] * Pausing node newest-cni-297942 ... 
	I1126 20:22:53.821819  287749 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:53.822074  287749 ssh_runner.go:195] Run: systemctl --version
	I1126 20:22:53.822115  287749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:53.844165  287749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:53.949681  287749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:53.962180  287749 pause.go:52] kubelet running: true
	I1126 20:22:53.962234  287749 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:22:54.120039  287749 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:22:54.120128  287749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:22:54.198610  287749 cri.go:89] found id: "79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c"
	I1126 20:22:54.198634  287749 cri.go:89] found id: "0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874"
	I1126 20:22:54.198640  287749 cri.go:89] found id: "0b56ca8f427ee8a88caaf56897449390deb07473ded1db55b4a4435b4a244998"
	I1126 20:22:54.198645  287749 cri.go:89] found id: "c6b80fb157b1876098abb174101b5cbb41aab43e97ca6bbab5097ef0385c3b13"
	I1126 20:22:54.198649  287749 cri.go:89] found id: "db4659bb8554145bfacd475b919e39a9767e55ff868938d2d744b53d7838507e"
	I1126 20:22:54.198654  287749 cri.go:89] found id: "cc32d8959eb0604113131904bc81d9b470aeaecc3fe9ac30a6213641dcb226f1"
	I1126 20:22:54.198658  287749 cri.go:89] found id: ""
	I1126 20:22:54.198704  287749 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:22:54.212263  287749 retry.go:31] will retry after 196.449261ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:54Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:22:54.409753  287749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:54.427884  287749 pause.go:52] kubelet running: false
	I1126 20:22:54.427960  287749 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:22:54.568899  287749 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:22:54.568960  287749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:22:54.638212  287749 cri.go:89] found id: "79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c"
	I1126 20:22:54.638233  287749 cri.go:89] found id: "0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874"
	I1126 20:22:54.638237  287749 cri.go:89] found id: "0b56ca8f427ee8a88caaf56897449390deb07473ded1db55b4a4435b4a244998"
	I1126 20:22:54.638240  287749 cri.go:89] found id: "c6b80fb157b1876098abb174101b5cbb41aab43e97ca6bbab5097ef0385c3b13"
	I1126 20:22:54.638243  287749 cri.go:89] found id: "db4659bb8554145bfacd475b919e39a9767e55ff868938d2d744b53d7838507e"
	I1126 20:22:54.638246  287749 cri.go:89] found id: "cc32d8959eb0604113131904bc81d9b470aeaecc3fe9ac30a6213641dcb226f1"
	I1126 20:22:54.638249  287749 cri.go:89] found id: ""
	I1126 20:22:54.638288  287749 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:22:54.650429  287749 retry.go:31] will retry after 197.561781ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:54Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:22:54.849388  287749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:54.863312  287749 pause.go:52] kubelet running: false
	I1126 20:22:54.863367  287749 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:22:55.013422  287749 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:22:55.013554  287749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:22:55.091011  287749 cri.go:89] found id: "79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c"
	I1126 20:22:55.091030  287749 cri.go:89] found id: "0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874"
	I1126 20:22:55.091034  287749 cri.go:89] found id: "0b56ca8f427ee8a88caaf56897449390deb07473ded1db55b4a4435b4a244998"
	I1126 20:22:55.091038  287749 cri.go:89] found id: "c6b80fb157b1876098abb174101b5cbb41aab43e97ca6bbab5097ef0385c3b13"
	I1126 20:22:55.091041  287749 cri.go:89] found id: "db4659bb8554145bfacd475b919e39a9767e55ff868938d2d744b53d7838507e"
	I1126 20:22:55.091044  287749 cri.go:89] found id: "cc32d8959eb0604113131904bc81d9b470aeaecc3fe9ac30a6213641dcb226f1"
	I1126 20:22:55.091047  287749 cri.go:89] found id: ""
	I1126 20:22:55.091080  287749 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:22:55.104386  287749 retry.go:31] will retry after 501.407566ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:55Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:22:55.606081  287749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:55.621782  287749 pause.go:52] kubelet running: false
	I1126 20:22:55.621836  287749 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:22:55.784373  287749 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:22:55.784486  287749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:22:55.866490  287749 cri.go:89] found id: "79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c"
	I1126 20:22:55.866518  287749 cri.go:89] found id: "0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874"
	I1126 20:22:55.866526  287749 cri.go:89] found id: "0b56ca8f427ee8a88caaf56897449390deb07473ded1db55b4a4435b4a244998"
	I1126 20:22:55.866531  287749 cri.go:89] found id: "c6b80fb157b1876098abb174101b5cbb41aab43e97ca6bbab5097ef0385c3b13"
	I1126 20:22:55.866535  287749 cri.go:89] found id: "db4659bb8554145bfacd475b919e39a9767e55ff868938d2d744b53d7838507e"
	I1126 20:22:55.866540  287749 cri.go:89] found id: "cc32d8959eb0604113131904bc81d9b470aeaecc3fe9ac30a6213641dcb226f1"
	I1126 20:22:55.866544  287749 cri.go:89] found id: ""
	I1126 20:22:55.866589  287749 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:22:55.883225  287749 out.go:203] 
	W1126 20:22:55.884372  287749 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:22:55.884389  287749 out.go:285] * 
	* 
	W1126 20:22:55.890391  287749 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:22:55.892482  287749 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-297942 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-297942
helpers_test.go:243: (dbg) docker inspect newest-cni-297942:

-- stdout --
	[
	    {
	        "Id": "40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584",
	        "Created": "2025-11-26T20:22:12.162948812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:22:41.158451192Z",
	            "FinishedAt": "2025-11-26T20:22:40.172025239Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/hostname",
	        "HostsPath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/hosts",
	        "LogPath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584-json.log",
	        "Name": "/newest-cni-297942",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-297942:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-297942",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584",
	                "LowerDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-297942",
	                "Source": "/var/lib/docker/volumes/newest-cni-297942/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-297942",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-297942",
	                "name.minikube.sigs.k8s.io": "newest-cni-297942",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2f35fe2a7e03d4ae53c80197232da4df6f428bcc28c758f21a90153fda12f531",
	            "SandboxKey": "/var/run/docker/netns/2f35fe2a7e03",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-297942": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a8acc179efb582b4f8ab1f8758542f842892d2dd2928aade1bbb97827e2c1af",
	                    "EndpointID": "1d80a818a66d5158851c96bfc37538fcb57e5fb123d59c70cd4517f824513591",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "aa:03:05:b0:ae:f3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-297942",
	                        "40b9f3c5f1a3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297942 -n newest-cni-297942
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297942 -n newest-cni-297942: exit status 2 (380.626ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-297942 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-297942 logs -n 25: (1.221638977s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-expiration-571738                                                                                                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p stopped-upgrade-211103                                                                                                                                                                                                                     │ stopped-upgrade-211103       │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p kubernetes-upgrade-225144                                                                                                                                                                                                                  │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p disable-driver-mounts-221304                                                                                                                                                                                                               │ disable-driver-mounts-221304 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p no-preload-026579 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p embed-certs-949294 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p no-preload-026579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-949294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p newest-cni-297942 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-297942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ image   │ newest-cni-297942 image list --format=json                                                                                                                                                                                                    │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ pause   │ -p newest-cni-297942 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-178152 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:22:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:22:40.884483  283132 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:22:40.884766  283132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:40.884776  283132 out.go:374] Setting ErrFile to fd 2...
	I1126 20:22:40.884785  283132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:40.884987  283132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:22:40.885440  283132 out.go:368] Setting JSON to false
	I1126 20:22:40.886566  283132 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3911,"bootTime":1764184650,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:22:40.886632  283132 start.go:143] virtualization: kvm guest
	I1126 20:22:40.888379  283132 out.go:179] * [newest-cni-297942] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:22:40.889473  283132 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:22:40.889503  283132 notify.go:221] Checking for updates...
	I1126 20:22:40.892833  283132 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:22:40.894800  283132 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:40.896376  283132 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:22:40.897743  283132 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:22:40.898713  283132 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:22:40.900231  283132 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:40.900958  283132 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:22:40.928114  283132 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:22:40.928202  283132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:41.015656  283132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:22:41.003539781 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:41.015804  283132 docker.go:319] overlay module found
	I1126 20:22:41.016948  283132 out.go:179] * Using the docker driver based on existing profile
	I1126 20:22:41.017883  283132 start.go:309] selected driver: docker
	I1126 20:22:41.017898  283132 start.go:927] validating driver "docker" against &{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:41.018002  283132 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:22:41.018724  283132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:41.084121  283132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:22:41.072667777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:41.084507  283132 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:22:41.084546  283132 cni.go:84] Creating CNI manager for ""
	I1126 20:22:41.084623  283132 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:41.084677  283132 start.go:353] cluster config:
	{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:41.086652  283132 out.go:179] * Starting "newest-cni-297942" primary control-plane node in "newest-cni-297942" cluster
	I1126 20:22:41.087583  283132 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:22:41.088592  283132 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:22:41.089520  283132 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:41.089554  283132 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:22:41.089569  283132 cache.go:65] Caching tarball of preloaded images
	I1126 20:22:41.089623  283132 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:22:41.089678  283132 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:22:41.089692  283132 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:22:41.089796  283132 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json ...
	I1126 20:22:41.111178  283132 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:22:41.111197  283132 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:22:41.111211  283132 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:22:41.111242  283132 start.go:360] acquireMachinesLock for newest-cni-297942: {Name:mkec4aea2213ece57272965b7ad56143d17ef93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:41.111305  283132 start.go:364] duration metric: took 40.156µs to acquireMachinesLock for "newest-cni-297942"
	I1126 20:22:41.111323  283132 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:22:41.111333  283132 fix.go:54] fixHost starting: 
	I1126 20:22:41.111591  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:41.129559  283132 fix.go:112] recreateIfNeeded on newest-cni-297942: state=Stopped err=<nil>
	W1126 20:22:41.129580  283132 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:22:39.153389  279050 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:39.153408  279050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:39.153478  279050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-026579
	I1126 20:22:39.179591  279050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:22:39.180647  279050 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:39.180665  279050 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:39.180721  279050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-026579
	I1126 20:22:39.186501  279050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:22:39.209307  279050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:22:39.273960  279050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:39.287389  279050 node_ready.go:35] waiting up to 6m0s for node "no-preload-026579" to be "Ready" ...
	I1126 20:22:39.298799  279050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:39.300737  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:22:39.300753  279050 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:22:39.315410  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:22:39.315430  279050 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:22:39.325338  279050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:39.331515  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:22:39.331534  279050 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:22:39.348174  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:22:39.348194  279050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:22:39.369916  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:22:39.369951  279050 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:22:39.385646  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:22:39.385669  279050 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:22:39.400976  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:22:39.401000  279050 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:22:39.416714  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:22:39.416732  279050 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:22:39.433038  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:39.433061  279050 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:22:39.447449  279050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:40.587082  279050 node_ready.go:49] node "no-preload-026579" is "Ready"
	I1126 20:22:40.587113  279050 node_ready.go:38] duration metric: took 1.299680318s for node "no-preload-026579" to be "Ready" ...
	I1126 20:22:40.587129  279050 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:40.587180  279050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:41.152424  279050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.853595012s)
	I1126 20:22:41.152577  279050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.827200281s)
	I1126 20:22:41.152686  279050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.705180849s)
	I1126 20:22:41.152711  279050 api_server.go:72] duration metric: took 2.029918005s to wait for apiserver process to appear ...
	I1126 20:22:41.152721  279050 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:41.152742  279050 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:22:41.156567  279050 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-026579 addons enable metrics-server
	
	I1126 20:22:41.157768  279050 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:41.157789  279050 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:41.157819  279050 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1126 20:22:41.159540  279050 addons.go:530] duration metric: took 2.036715336s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1126 20:22:41.653489  279050 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:22:41.658910  279050 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:41.658967  279050 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
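The two 500 responses above are the apiserver's verbose `/healthz` output, where `[+]` marks a passing check and `[-]` a failing one; minikube keeps polling until no `[-]` lines remain. A minimal sketch of filtering such a captured response down to its failing checks (the sample body is trimmed from the log above):

```shell
# Extract only the failing ("[-]") checks from a saved healthz verbose
# response, as captured earlier in this log.
healthz='[+]ping ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]etcd ok'
printf '%s\n' "$healthz" | grep '^\[-\]'
# → [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
```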
	I1126 20:22:37.813684  281230 out.go:252] * Restarting existing docker container for "embed-certs-949294" ...
	I1126 20:22:37.813768  281230 cli_runner.go:164] Run: docker start embed-certs-949294
	I1126 20:22:38.131293  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:38.152794  281230 kic.go:430] container "embed-certs-949294" state is running.
	I1126 20:22:38.153224  281230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-949294
	I1126 20:22:38.175166  281230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/config.json ...
	I1126 20:22:38.175388  281230 machine.go:94] provisionDockerMachine start ...
	I1126 20:22:38.175448  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:38.196588  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:38.196809  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:38.196819  281230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:22:38.197513  281230 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60668->127.0.0.1:33088: read: connection reset by peer
	I1126 20:22:41.353546  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-949294
	
	I1126 20:22:41.353574  281230 ubuntu.go:182] provisioning hostname "embed-certs-949294"
	I1126 20:22:41.353632  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.371710  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.371940  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:41.371965  281230 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-949294 && echo "embed-certs-949294" | sudo tee /etc/hostname
	I1126 20:22:41.527011  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-949294
	
	I1126 20:22:41.527082  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.552128  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.552497  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:41.552529  281230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-949294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-949294/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-949294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:22:41.706552  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
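The SSH command above is minikube's idempotent `/etc/hosts` update: it only rewrites the `127.0.1.1` entry when the hostname is not already present. A sketch of the same logic run against a temp copy instead of the real `/etc/hosts` (the hostname is taken from the log; the temp-file setup is an assumption for the sketch):

```shell
# Idempotent 127.0.1.1 hostname update, mirroring the script the log
# runs over SSH, applied to a temp file rather than /etc/hosts.
hosts=$(mktemp)
printf '127.0.1.1 old-name\n' > "$hosts"
name=embed-certs-949294
if ! grep -Eq "[[:space:]]$name" "$hosts"; then
  if grep -Eq '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # Replace the existing 127.0.1.1 line with the new hostname.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
# → 127.0.1.1 embed-certs-949294
```

Running it a second time is a no-op, since the first `grep` then matches.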
	I1126 20:22:41.706582  281230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:22:41.706605  281230 ubuntu.go:190] setting up certificates
	I1126 20:22:41.706617  281230 provision.go:84] configureAuth start
	I1126 20:22:41.706674  281230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-949294
	I1126 20:22:41.731291  281230 provision.go:143] copyHostCerts
	I1126 20:22:41.731358  281230 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:22:41.731373  281230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:22:41.731452  281230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:22:41.731672  281230 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:22:41.731683  281230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:22:41.731717  281230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:22:41.731789  281230 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:22:41.731798  281230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:22:41.731833  281230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:22:41.731947  281230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.embed-certs-949294 san=[127.0.0.1 192.168.94.2 embed-certs-949294 localhost minikube]
	I1126 20:22:41.778215  281230 provision.go:177] copyRemoteCerts
	I1126 20:22:41.778266  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:22:41.778295  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.797553  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:41.908508  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:22:41.927584  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:22:41.944361  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:22:41.960987  281230 provision.go:87] duration metric: took 254.359611ms to configureAuth
	I1126 20:22:41.961014  281230 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:22:41.961161  281230 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:41.961244  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.979703  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.980006  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:41.980032  281230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:22:42.318188  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:22:42.318211  281230 machine.go:97] duration metric: took 4.142808387s to provisionDockerMachine
	I1126 20:22:42.318225  281230 start.go:293] postStartSetup for "embed-certs-949294" (driver="docker")
	I1126 20:22:42.318237  281230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:22:42.318297  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:22:42.318364  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.338327  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.438215  281230 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:22:42.441404  281230 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:22:42.441434  281230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:22:42.441446  281230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:22:42.441539  281230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:22:42.441610  281230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:22:42.441700  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:22:42.448842  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:42.465662  281230 start.go:296] duration metric: took 147.425996ms for postStartSetup
	I1126 20:22:42.465729  281230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:22:42.465774  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.483672  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.582571  281230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:22:42.589248  281230 fix.go:56] duration metric: took 4.801612317s for fixHost
	I1126 20:22:42.589282  281230 start.go:83] releasing machines lock for "embed-certs-949294", held for 4.801666542s
	I1126 20:22:42.589356  281230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-949294
	I1126 20:22:42.613599  281230 ssh_runner.go:195] Run: cat /version.json
	I1126 20:22:42.613635  281230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:22:42.613653  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.613694  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.640998  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.641470  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.742494  281230 ssh_runner.go:195] Run: systemctl --version
	I1126 20:22:42.794845  281230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:22:42.828506  281230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:22:42.833001  281230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:22:42.833081  281230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:22:42.840611  281230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:22:42.840633  281230 start.go:496] detecting cgroup driver to use...
	I1126 20:22:42.840662  281230 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:22:42.840704  281230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:22:42.854304  281230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:22:42.865621  281230 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:22:42.865663  281230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:22:42.879121  281230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:22:42.890217  281230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:22:42.972124  281230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:22:43.054010  281230 docker.go:234] disabling docker service ...
	I1126 20:22:43.054076  281230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:22:43.067236  281230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:22:43.079079  281230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:22:43.158407  281230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:22:43.236403  281230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:22:43.249898  281230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:22:43.266098  281230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:22:43.266169  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.275593  281230 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:22:43.275650  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.286305  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.295428  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.304196  281230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:22:43.312078  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.320105  281230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.328187  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
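The run of `sed -i` commands above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place (pause image, cgroup manager, conmon cgroup, sysctls). A sketch of the two core rewrites against a temp copy, with sample starting values assumed for illustration:

```shell
# Apply the pause_image and cgroup_manager rewrites from the log to a
# temp copy of 02-crio.conf (the starting contents are assumed).
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
EOF
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
cat "$conf"
# → pause_image = "registry.k8s.io/pause:3.10.1"
# → cgroup_manager = "systemd"
```

After edits like these, the log restarts cri-o (`systemctl restart crio`) so the new config takes effect.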
	I1126 20:22:43.336849  281230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:22:43.344213  281230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:22:43.351591  281230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:43.434081  281230 ssh_runner.go:195] Run: sudo systemctl restart crio
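	The lines above mutate /etc/crio/crio.conf.d/02-crio.conf in place with sed before restarting CRI-O. A minimal sketch of the same transformations, replayed on a scratch copy so nothing under /etc is touched — the seed contents below are hypothetical; only the sed expressions mirror the log (GNU sed, as on the Linux test host):

```shell
# Replay of the log's sed edits on a throwaway copy of 02-crio.conf.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
EOF
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$tmp"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$tmp"
sed -i '/conmon_cgroup = .*/d' "$tmp"                         # drop any old setting
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$tmp"  # re-add after cgroup_manager
grep -q "^ *default_sysctls" "$tmp" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$tmp"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$tmp"
cat "$tmp"
```

	After the edits the scratch file carries the new pause image, the systemd cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl that the subsequent `systemctl restart crio` would pick up.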
	I1126 20:22:43.584410  281230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:22:43.584499  281230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
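	start.go gives the restarted runtime up to 60s for the socket path to appear before probing it with stat. The wait-for-path pattern can be sketched against a dummy file (path and timings here are stand-ins, not the real crio.sock):

```shell
# Wait-for-file sketch: a background job creates the "socket" after 0.2s,
# and the loop polls for it with a bounded number of attempts (~60s total).
f=$(mktemp -u)
(sleep 0.2; touch "$f") &
for _ in $(seq 1 600); do
  [ -e "$f" ] && break
  sleep 0.1
done
stat -c '%n' "$f"    # same existence check the log runs on crio.sock
```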
	I1126 20:22:43.588269  281230 start.go:564] Will wait 60s for crictl version
	I1126 20:22:43.588336  281230 ssh_runner.go:195] Run: which crictl
	I1126 20:22:43.591767  281230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:22:43.614952  281230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:22:43.615025  281230 ssh_runner.go:195] Run: crio --version
	I1126 20:22:43.641356  281230 ssh_runner.go:195] Run: crio --version
	I1126 20:22:43.667903  281230 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1126 20:22:39.746749  271308 node_ready.go:57] node "default-k8s-diff-port-178152" has "Ready":"False" status (will retry)
	I1126 20:22:42.249658  271308 node_ready.go:49] node "default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:42.250190  271308 node_ready.go:38] duration metric: took 11.006799541s for node "default-k8s-diff-port-178152" to be "Ready" ...
	I1126 20:22:42.250224  271308 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:42.250294  271308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:42.272956  271308 api_server.go:72] duration metric: took 11.381347219s to wait for apiserver process to appear ...
	I1126 20:22:42.272984  271308 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:42.273006  271308 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:22:42.279175  271308 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1126 20:22:42.280247  271308 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:42.280267  271308 api_server.go:131] duration metric: took 7.276294ms to wait for apiserver health ...
	I1126 20:22:42.280275  271308 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:42.283222  271308 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:42.283253  271308 system_pods.go:61] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.283261  271308 system_pods.go:61] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.283266  271308 system_pods.go:61] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.283269  271308 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.283273  271308 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.283280  271308 system_pods.go:61] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.283283  271308 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.283288  271308 system_pods.go:61] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.283293  271308 system_pods.go:74] duration metric: took 3.013459ms to wait for pod list to return data ...
	I1126 20:22:42.283303  271308 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:42.285341  271308 default_sa.go:45] found service account: "default"
	I1126 20:22:42.285361  271308 default_sa.go:55] duration metric: took 2.052746ms for default service account to be created ...
	I1126 20:22:42.285368  271308 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:42.287817  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.287844  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.287851  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.287871  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.287878  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.287906  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.287912  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.287918  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.287927  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.287958  271308 retry.go:31] will retry after 308.61666ms: missing components: kube-dns
	I1126 20:22:42.602933  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.602960  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.602966  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.602971  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.602975  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.602979  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.602982  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.602985  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.602989  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.603002  271308 retry.go:31] will retry after 352.870646ms: missing components: kube-dns
	I1126 20:22:42.960487  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.960513  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.960519  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.960525  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.960532  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.960536  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.960545  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.960550  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.960554  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.960567  271308 retry.go:31] will retry after 370.669224ms: missing components: kube-dns
	I1126 20:22:43.336323  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:43.336368  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Running
	I1126 20:22:43.336377  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:43.336384  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:43.336390  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:43.336401  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:43.336406  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:43.336412  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:43.336420  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Running
	I1126 20:22:43.336429  271308 system_pods.go:126] duration metric: took 1.051054713s to wait for k8s-apps to be running ...
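	The retry.go sequence above re-lists the kube-system pods with a randomized few-hundred-millisecond backoff until no component is reported missing. The loop shape, with a hypothetical check_ready standing in for the real pod listing:

```shell
# Poll-until-ready sketch; check_ready is a stand-in that succeeds on the
# fourth call, mimicking "missing components: kube-dns" clearing after 3 retries.
attempt=0
check_ready() { [ "$attempt" -ge 3 ]; }
until check_ready; do
  attempt=$((attempt + 1))
  sleep 0.05    # the log shows randomized waits of roughly 300-400ms per retry
done
echo "ready after $attempt retries"
```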
	I1126 20:22:43.336442  271308 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:43.336492  271308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:43.349377  271308 system_svc.go:56] duration metric: took 12.93002ms WaitForService to wait for kubelet
	I1126 20:22:43.349397  271308 kubeadm.go:587] duration metric: took 12.457793394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:43.349410  271308 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:43.352231  271308 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:43.352254  271308 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:43.352270  271308 node_conditions.go:105] duration metric: took 2.855748ms to run NodePressure ...
	I1126 20:22:43.352281  271308 start.go:242] waiting for startup goroutines ...
	I1126 20:22:43.352290  271308 start.go:247] waiting for cluster config update ...
	I1126 20:22:43.352299  271308 start.go:256] writing updated cluster config ...
	I1126 20:22:43.352549  271308 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:43.356029  271308 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:43.359306  271308 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.363412  271308 pod_ready.go:94] pod "coredns-66bc5c9577-tpmmm" is "Ready"
	I1126 20:22:43.363435  271308 pod_ready.go:86] duration metric: took 4.112055ms for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.365248  271308 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.368843  271308 pod_ready.go:94] pod "etcd-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:43.368862  271308 pod_ready.go:86] duration metric: took 3.598035ms for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.370559  271308 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.373917  271308 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:43.373937  271308 pod_ready.go:86] duration metric: took 3.359149ms for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.375639  271308 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.760756  271308 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:43.760788  271308 pod_ready.go:86] duration metric: took 385.124259ms for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.960061  271308 pod_ready.go:83] waiting for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:44.359897  271308 pod_ready.go:94] pod "kube-proxy-vd7fp" is "Ready"
	I1126 20:22:44.359924  271308 pod_ready.go:86] duration metric: took 399.838276ms for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:44.560435  271308 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.668973  281230 cli_runner.go:164] Run: docker network inspect embed-certs-949294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:43.686898  281230 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1126 20:22:43.690943  281230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
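	The grep -v / echo / cp pipeline above is an idempotent upsert of the host.minikube.internal mapping: any prior entry is filtered out before the fresh one is appended, so re-running it never duplicates the line. Replayed on a scratch hosts file (contents hypothetical):

```shell
# Idempotent hosts-entry upsert, as in the log, against a temp file that
# already contains a stale mapping.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.94.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.94.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # still exactly one entry
```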
	I1126 20:22:43.701122  281230 kubeadm.go:884] updating cluster {Name:embed-certs-949294 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-949294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:22:43.701233  281230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:43.701286  281230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:43.733576  281230 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:43.733598  281230 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:22:43.733638  281230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:43.757784  281230 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:43.757801  281230 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:22:43.757809  281230 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1126 20:22:43.757903  281230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-949294 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-949294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:22:43.757958  281230 ssh_runner.go:195] Run: crio config
	I1126 20:22:43.801014  281230 cni.go:84] Creating CNI manager for ""
	I1126 20:22:43.801042  281230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:43.801062  281230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:22:43.801091  281230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-949294 NodeName:embed-certs-949294 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:22:43.801281  281230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-949294"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:22:43.801354  281230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:22:43.809139  281230 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:22:43.809185  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:22:43.816443  281230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:22:43.828647  281230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:22:43.842109  281230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1126 20:22:43.853618  281230 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:22:43.856940  281230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:43.866100  281230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:43.974877  281230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:43.999946  281230 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294 for IP: 192.168.94.2
	I1126 20:22:43.999968  281230 certs.go:195] generating shared ca certs ...
	I1126 20:22:43.999990  281230 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.000162  281230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:22:44.000228  281230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:22:44.000242  281230 certs.go:257] generating profile certs ...
	I1126 20:22:44.000348  281230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/client.key
	I1126 20:22:44.000422  281230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/apiserver.key.5bee8ac0
	I1126 20:22:44.000502  281230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/proxy-client.key
	I1126 20:22:44.000653  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:22:44.000697  281230 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:22:44.000711  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:22:44.000754  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:22:44.000799  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:22:44.000834  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:22:44.000897  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:44.001493  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:22:44.019892  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:22:44.040066  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:22:44.057726  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:22:44.081328  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1126 20:22:44.098058  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:22:44.113934  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:22:44.129831  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:22:44.145588  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:22:44.161958  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:22:44.178404  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:22:44.195831  281230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:22:44.207337  281230 ssh_runner.go:195] Run: openssl version
	I1126 20:22:44.213097  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:22:44.220687  281230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:22:44.224116  281230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:22:44.224164  281230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:22:44.258977  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:22:44.267014  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:22:44.275688  281230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:44.279299  281230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:44.279349  281230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:44.314548  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:22:44.322323  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:22:44.331309  281230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:22:44.334747  281230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:22:44.334792  281230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:22:44.369194  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
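	The 3ec20f2e.0, b5213941.0, and 51391683.0 link names above are OpenSSL subject-name hashes: `openssl x509 -hash` prints the value the trust store expects as the symlink name for each CA. A throwaway demonstration (CN and directory are hypothetical; only the hashing and linking pattern mirrors the log):

```shell
# Derive the <hash>.0 trust-store link name for a self-signed throwaway cert.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
h=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$h.0"    # cf. /etc/ssl/certs/<hash>.0 in the log
ls -l "$dir/$h.0"
```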
	I1126 20:22:44.377304  281230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:22:44.381220  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:22:44.417889  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:22:44.454503  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:22:44.491150  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:22:44.542762  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:22:44.589987  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
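	Each of the six checks above uses `-checkend 86400`, which exits 0 only if the certificate remains valid for at least one more day — a failure here would flag a control-plane cert expiring within 24h. Against a throwaway two-day cert (subject and paths hypothetical):

```shell
# -checkend N: exit 0 iff the cert is still valid N seconds from now.
c=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$c/demo.key" -out "$c/demo.crt" -days 2 2>/dev/null
openssl x509 -noout -in "$c/demo.crt" -checkend 86400 && echo "ok for another day"
```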
	I1126 20:22:44.653209  281230 kubeadm.go:401] StartCluster: {Name:embed-certs-949294 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-949294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:44.653317  281230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:22:44.653402  281230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:22:44.698166  281230 cri.go:89] found id: "d0edb51e9e5fccff0dc7134b628a507303eb3f0ea693960b2ef07c819ccfcfb3"
	I1126 20:22:44.698189  281230 cri.go:89] found id: "1247796ab1281fb5ee5525e146ff9c03a80b5e36662073b88f7b4335b21630fa"
	I1126 20:22:44.698194  281230 cri.go:89] found id: "f265ea81a09610c82177779173f228ed110d405daaad945e9224abda5afc655e"
	I1126 20:22:44.698199  281230 cri.go:89] found id: "27f1868f6da521ee9bf27fc5eccba3b561b07052f526f759dfbebcc382b682e0"
	I1126 20:22:44.698204  281230 cri.go:89] found id: ""
	I1126 20:22:44.698249  281230 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:22:44.712857  281230 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:44Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:22:44.712953  281230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:22:44.721110  281230 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:22:44.721122  281230 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:22:44.721219  281230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:22:44.728115  281230 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:22:44.728769  281230 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-949294" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:44.729067  281230 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-949294" cluster setting kubeconfig missing "embed-certs-949294" context setting]
	I1126 20:22:44.729783  281230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.731342  281230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:22:44.739276  281230 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1126 20:22:44.739300  281230 kubeadm.go:602] duration metric: took 18.174206ms to restartPrimaryControlPlane
	I1126 20:22:44.739307  281230 kubeadm.go:403] duration metric: took 86.108546ms to StartCluster
	I1126 20:22:44.739318  281230 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.739377  281230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:44.740675  281230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.740856  281230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:22:44.741084  281230 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:44.741124  281230 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:22:44.741179  281230 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-949294"
	I1126 20:22:44.741193  281230 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-949294"
	W1126 20:22:44.741198  281230 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:22:44.741214  281230 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:22:44.741554  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.741639  281230 addons.go:70] Setting dashboard=true in profile "embed-certs-949294"
	I1126 20:22:44.741651  281230 addons.go:70] Setting default-storageclass=true in profile "embed-certs-949294"
	I1126 20:22:44.741668  281230 addons.go:239] Setting addon dashboard=true in "embed-certs-949294"
	I1126 20:22:44.741669  281230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-949294"
	W1126 20:22:44.741678  281230 addons.go:248] addon dashboard should already be in state true
	I1126 20:22:44.741729  281230 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:22:44.741928  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.742228  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.742329  281230 out.go:179] * Verifying Kubernetes components...
	I1126 20:22:44.745728  281230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:44.769720  281230 addons.go:239] Setting addon default-storageclass=true in "embed-certs-949294"
	W1126 20:22:44.769745  281230 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:22:44.769776  281230 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:22:44.770229  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.770534  281230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:22:44.771603  281230 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:22:44.771655  281230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:44.771665  281230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:44.771726  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:44.773363  281230 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:22:44.961735  271308 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:44.961781  271308 pod_ready.go:86] duration metric: took 401.291943ms for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:44.961797  271308 pod_ready.go:40] duration metric: took 1.605738411s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:45.024642  271308 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:22:45.028340  271308 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-178152" cluster and "default" namespace by default
	I1126 20:22:41.130916  283132 out.go:252] * Restarting existing docker container for "newest-cni-297942" ...
	I1126 20:22:41.130973  283132 cli_runner.go:164] Run: docker start newest-cni-297942
	I1126 20:22:41.417598  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:41.436343  283132 kic.go:430] container "newest-cni-297942" state is running.
	I1126 20:22:41.436757  283132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-297942
	I1126 20:22:41.454760  283132 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json ...
	I1126 20:22:41.454963  283132 machine.go:94] provisionDockerMachine start ...
	I1126 20:22:41.455014  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:41.473682  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.473897  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:41.473908  283132 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:22:41.474510  283132 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53014->127.0.0.1:33093: read: connection reset by peer
	I1126 20:22:44.628083  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-297942
	
	I1126 20:22:44.628112  283132 ubuntu.go:182] provisioning hostname "newest-cni-297942"
	I1126 20:22:44.628888  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:44.654951  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:44.655280  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:44.655300  283132 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-297942 && echo "newest-cni-297942" | sudo tee /etc/hostname
	I1126 20:22:44.836325  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-297942
	
	I1126 20:22:44.836408  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:44.860919  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:44.861149  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:44.861181  283132 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-297942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-297942/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-297942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:22:45.024750  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:22:45.024885  283132 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:22:45.024931  283132 ubuntu.go:190] setting up certificates
	I1126 20:22:45.025025  283132 provision.go:84] configureAuth start
	I1126 20:22:45.025434  283132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-297942
	I1126 20:22:45.053790  283132 provision.go:143] copyHostCerts
	I1126 20:22:45.054123  283132 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:22:45.054181  283132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:22:45.054621  283132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:22:45.054815  283132 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:22:45.054941  283132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:22:45.056077  283132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:22:45.056254  283132 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:22:45.056282  283132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:22:45.056373  283132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:22:45.056499  283132 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.newest-cni-297942 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-297942]
	I1126 20:22:45.148820  283132 provision.go:177] copyRemoteCerts
	I1126 20:22:45.148880  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:22:45.148938  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.175942  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:45.287084  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:22:45.308935  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:22:45.325992  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:22:45.342613  283132 provision.go:87] duration metric: took 317.575317ms to configureAuth
	I1126 20:22:45.342637  283132 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:22:45.342828  283132 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:45.342955  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.362599  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:45.362913  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:45.362936  283132 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:22:45.681202  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:22:45.681227  283132 machine.go:97] duration metric: took 4.226250286s to provisionDockerMachine
	I1126 20:22:45.681240  283132 start.go:293] postStartSetup for "newest-cni-297942" (driver="docker")
	I1126 20:22:45.681252  283132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:22:45.681306  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:22:45.681356  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.705211  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:45.819521  283132 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:22:45.823878  283132 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:22:45.823902  283132 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:22:45.823911  283132 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:22:45.823957  283132 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:22:45.824019  283132 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:22:45.824103  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:22:45.832396  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:45.855936  283132 start.go:296] duration metric: took 174.682288ms for postStartSetup
	I1126 20:22:45.856010  283132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:22:45.856070  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.877896  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:42.153037  279050 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:22:42.157427  279050 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:22:42.158369  279050 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:42.158392  279050 api_server.go:131] duration metric: took 1.005661792s to wait for apiserver health ...
	I1126 20:22:42.158401  279050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:42.161910  279050 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:42.161934  279050 system_pods.go:61] "coredns-66bc5c9577-wl4xp" [e1cf9739-1b9a-44d7-a932-447ac94e142d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.161942  279050 system_pods.go:61] "etcd-no-preload-026579" [cce16df8-4867-47fc-acec-5b1651799367] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:42.161952  279050 system_pods.go:61] "kindnet-8rfpj" [09d32618-edb6-49f0-b9ce-af0f0751b53f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:42.161968  279050 system_pods.go:61] "kube-apiserver-no-preload-026579" [4119730b-bf6c-4d48-9883-13ca12edeabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:42.161984  279050 system_pods.go:61] "kube-controller-manager-no-preload-026579" [c583b53a-df53-4d77-995f-c6c7d1189418] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:42.161995  279050 system_pods.go:61] "kube-proxy-ktbwp" [93566a91-b6dc-47fa-9d46-9ebf0fc4704a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:42.162008  279050 system_pods.go:61] "kube-scheduler-no-preload-026579" [0ae4f2d1-175e-45c0-bbb2-eedf826e6fb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:42.162015  279050 system_pods.go:61] "storage-provisioner" [e2f8aa92-297b-4be4-a3a2-45a956763aad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.162021  279050 system_pods.go:74] duration metric: took 3.614709ms to wait for pod list to return data ...
	I1126 20:22:42.162029  279050 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:42.164140  279050 default_sa.go:45] found service account: "default"
	I1126 20:22:42.164157  279050 default_sa.go:55] duration metric: took 2.123726ms for default service account to be created ...
	I1126 20:22:42.164165  279050 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:42.166895  279050 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.166923  279050 system_pods.go:89] "coredns-66bc5c9577-wl4xp" [e1cf9739-1b9a-44d7-a932-447ac94e142d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.166933  279050 system_pods.go:89] "etcd-no-preload-026579" [cce16df8-4867-47fc-acec-5b1651799367] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:42.166942  279050 system_pods.go:89] "kindnet-8rfpj" [09d32618-edb6-49f0-b9ce-af0f0751b53f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:42.166955  279050 system_pods.go:89] "kube-apiserver-no-preload-026579" [4119730b-bf6c-4d48-9883-13ca12edeabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:42.166963  279050 system_pods.go:89] "kube-controller-manager-no-preload-026579" [c583b53a-df53-4d77-995f-c6c7d1189418] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:42.166986  279050 system_pods.go:89] "kube-proxy-ktbwp" [93566a91-b6dc-47fa-9d46-9ebf0fc4704a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:42.167013  279050 system_pods.go:89] "kube-scheduler-no-preload-026579" [0ae4f2d1-175e-45c0-bbb2-eedf826e6fb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:42.167025  279050 system_pods.go:89] "storage-provisioner" [e2f8aa92-297b-4be4-a3a2-45a956763aad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.167036  279050 system_pods.go:126] duration metric: took 2.86619ms to wait for k8s-apps to be running ...
	I1126 20:22:42.167048  279050 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:42.167096  279050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:42.179063  279050 system_svc.go:56] duration metric: took 12.010286ms WaitForService to wait for kubelet
	I1126 20:22:42.179086  279050 kubeadm.go:587] duration metric: took 3.056293076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:42.179104  279050 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:42.181486  279050 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:42.181505  279050 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:42.181517  279050 node_conditions.go:105] duration metric: took 2.408547ms to run NodePressure ...
	I1126 20:22:42.181527  279050 start.go:242] waiting for startup goroutines ...
	I1126 20:22:42.181536  279050 start.go:247] waiting for cluster config update ...
	I1126 20:22:42.181545  279050 start.go:256] writing updated cluster config ...
	I1126 20:22:42.181758  279050 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:42.185430  279050 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:42.188391  279050 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wl4xp" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:22:44.193372  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:22:46.193941  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	I1126 20:22:44.775191  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:22:44.775236  281230 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:22:44.775284  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:44.802026  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:44.804445  281230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:44.804510  281230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:44.804668  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:44.809300  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:44.836635  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:44.906402  281230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:44.926683  281230 node_ready.go:35] waiting up to 6m0s for node "embed-certs-949294" to be "Ready" ...
	I1126 20:22:44.942037  281230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:44.943189  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:22:44.943289  281230 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:22:44.958004  281230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:44.964275  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:22:44.964293  281230 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:22:44.988499  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:22:44.988525  281230 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:22:45.008309  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:22:45.008331  281230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:22:45.030026  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:22:45.030061  281230 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:22:45.054222  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:22:45.054247  281230 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:22:45.075321  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:22:45.075344  281230 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:22:45.092705  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:22:45.092729  281230 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:22:45.109718  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:45.109739  281230 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:22:45.123556  281230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:46.834584  281230 node_ready.go:49] node "embed-certs-949294" is "Ready"
	I1126 20:22:46.834631  281230 node_ready.go:38] duration metric: took 1.907908732s for node "embed-certs-949294" to be "Ready" ...
	I1126 20:22:46.834647  281230 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:46.834802  281230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:47.646270  281230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.704087312s)
	I1126 20:22:47.646325  281230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.688236386s)
	I1126 20:22:47.646452  281230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.522860781s)
	I1126 20:22:47.646922  281230 api_server.go:72] duration metric: took 2.906037516s to wait for apiserver process to appear ...
	I1126 20:22:47.646942  281230 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:47.646959  281230 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1126 20:22:47.650745  281230 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-949294 addons enable metrics-server
	
	I1126 20:22:45.981988  283132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:22:45.987899  283132 fix.go:56] duration metric: took 4.876561031s for fixHost
	I1126 20:22:45.987927  283132 start.go:83] releasing machines lock for "newest-cni-297942", held for 4.876610638s
	I1126 20:22:45.987992  283132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-297942
	I1126 20:22:46.011274  283132 ssh_runner.go:195] Run: cat /version.json
	I1126 20:22:46.011335  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:46.011553  283132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:22:46.011634  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:46.035874  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:46.038422  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:46.145928  283132 ssh_runner.go:195] Run: systemctl --version
	I1126 20:22:46.208754  283132 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:22:46.260685  283132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:22:46.266786  283132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:22:46.266850  283132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:22:46.279170  283132 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:22:46.279196  283132 start.go:496] detecting cgroup driver to use...
	I1126 20:22:46.279228  283132 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:22:46.279279  283132 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:22:46.296769  283132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:22:46.312842  283132 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:22:46.313623  283132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:22:46.336404  283132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:22:46.362833  283132 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:22:46.485694  283132 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:22:46.608625  283132 docker.go:234] disabling docker service ...
	I1126 20:22:46.608710  283132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:22:46.627969  283132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:22:46.647325  283132 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:22:46.777835  283132 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:22:46.941504  283132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:22:46.960693  283132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:22:46.980499  283132 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:22:46.980558  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:46.994995  283132 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:22:46.995161  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.007396  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.019337  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.031265  283132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:22:47.041699  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.052215  283132 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.063748  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.075564  283132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:22:47.087066  283132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:22:47.098156  283132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:47.230987  283132 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:22:47.533145  283132 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:22:47.533212  283132 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:22:47.539562  283132 start.go:564] Will wait 60s for crictl version
	I1126 20:22:47.539619  283132 ssh_runner.go:195] Run: which crictl
	I1126 20:22:47.545726  283132 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:22:47.577381  283132 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:22:47.577482  283132 ssh_runner.go:195] Run: crio --version
	I1126 20:22:47.614544  283132 ssh_runner.go:195] Run: crio --version
	I1126 20:22:47.654164  283132 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:22:47.652263  281230 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:47.652284  281230 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:47.661749  281230 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1126 20:22:47.655252  283132 cli_runner.go:164] Run: docker network inspect newest-cni-297942 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:47.676378  283132 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1126 20:22:47.681380  283132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:47.696551  283132 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1126 20:22:47.697725  283132 kubeadm.go:884] updating cluster {Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:22:47.697864  283132 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:47.697953  283132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:47.737614  283132 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:47.737644  283132 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:22:47.737710  283132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:47.769807  283132 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:47.769838  283132 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:22:47.769848  283132 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1126 20:22:47.769987  283132 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-297942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:22:47.770072  283132 ssh_runner.go:195] Run: crio config
	I1126 20:22:47.833805  283132 cni.go:84] Creating CNI manager for ""
	I1126 20:22:47.833849  283132 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:47.833867  283132 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1126 20:22:47.833903  283132 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-297942 NodeName:newest-cni-297942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:22:47.834082  283132 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-297942"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:22:47.834169  283132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:22:47.843484  283132 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:22:47.843547  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:22:47.853856  283132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:22:47.868846  283132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:22:47.885385  283132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1126 20:22:47.903633  283132 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:22:47.908802  283132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:47.922224  283132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:48.037628  283132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:48.069247  283132 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942 for IP: 192.168.103.2
	I1126 20:22:48.069272  283132 certs.go:195] generating shared ca certs ...
	I1126 20:22:48.069292  283132 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:48.069497  283132 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:22:48.069570  283132 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:22:48.069587  283132 certs.go:257] generating profile certs ...
	I1126 20:22:48.069711  283132 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/client.key
	I1126 20:22:48.069784  283132 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/apiserver.key.9b9f8b84
	I1126 20:22:48.069880  283132 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/proxy-client.key
	I1126 20:22:48.070067  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:22:48.070122  283132 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:22:48.070133  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:22:48.070169  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:22:48.070199  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:22:48.070235  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:22:48.070293  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:48.071194  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:22:48.097890  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:22:48.121561  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:22:48.146613  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:22:48.176193  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1126 20:22:48.202051  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:22:48.225070  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:22:48.246760  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:22:48.269084  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:22:48.292062  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:22:48.313735  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:22:48.335657  283132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:22:48.351074  283132 ssh_runner.go:195] Run: openssl version
	I1126 20:22:48.358937  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:22:48.369856  283132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:22:48.375367  283132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:22:48.375419  283132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:22:48.428766  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:22:48.439674  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:22:48.450900  283132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:48.455705  283132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:48.455757  283132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:48.509707  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:22:48.520864  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:22:48.532096  283132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:22:48.536714  283132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:22:48.536763  283132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:22:48.592642  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:22:48.602562  283132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:22:48.607725  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:22:48.668271  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:22:48.723058  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:22:48.766993  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:22:48.809051  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:22:48.869800  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:22:48.933325  283132 kubeadm.go:401] StartCluster: {Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:48.933433  283132 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:22:48.933507  283132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:22:48.969182  283132 cri.go:89] found id: ""
	I1126 20:22:48.969273  283132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:22:48.980080  283132 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:22:48.980099  283132 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:22:48.980145  283132 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:22:48.990153  283132 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:22:48.991382  283132 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-297942" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:48.992253  283132 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-297942" cluster setting kubeconfig missing "newest-cni-297942" context setting]
	I1126 20:22:48.993562  283132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:48.995871  283132 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:22:49.006243  283132 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1126 20:22:49.006272  283132 kubeadm.go:602] duration metric: took 26.166791ms to restartPrimaryControlPlane
	I1126 20:22:49.006282  283132 kubeadm.go:403] duration metric: took 72.966028ms to StartCluster
	I1126 20:22:49.006297  283132 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:49.006353  283132 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:49.008962  283132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:49.010081  283132 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:22:49.010330  283132 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:49.010385  283132 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:22:49.010493  283132 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-297942"
	I1126 20:22:49.010512  283132 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-297942"
	W1126 20:22:49.010523  283132 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:22:49.010550  283132 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:49.010793  283132 addons.go:70] Setting dashboard=true in profile "newest-cni-297942"
	I1126 20:22:49.010822  283132 addons.go:70] Setting default-storageclass=true in profile "newest-cni-297942"
	I1126 20:22:49.010829  283132 addons.go:239] Setting addon dashboard=true in "newest-cni-297942"
	W1126 20:22:49.010840  283132 addons.go:248] addon dashboard should already be in state true
	I1126 20:22:49.010844  283132 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-297942"
	I1126 20:22:49.010864  283132 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:49.011039  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.011163  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.011281  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.039942  283132 addons.go:239] Setting addon default-storageclass=true in "newest-cni-297942"
	W1126 20:22:49.039969  283132 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:22:49.039995  283132 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:49.040473  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.062659  283132 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:49.062681  283132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:49.062734  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:49.071753  283132 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:22:49.071754  283132 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:22:49.071760  283132 out.go:179] * Verifying Kubernetes components...
	I1126 20:22:49.083205  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:49.093615  283132 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:49.093646  283132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:49.093716  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:49.094772  283132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:49.095752  283132 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:22:49.098197  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:22:49.098216  283132 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:22:49.098302  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:49.120042  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:49.124517  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:49.223673  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:49.233917  283132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:49.244980  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:49.257038  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:22:49.257061  283132 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:22:49.295636  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:22:49.295664  283132 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W1126 20:22:49.312492  283132 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.312533  283132 retry.go:31] will retry after 141.575876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.312612  283132 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:49.312669  283132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:49.321556  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:22:49.321592  283132 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1126 20:22:49.344947  283132 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.344982  283132 retry.go:31] will retry after 218.049714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
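	(Editor's note: both `kubectl apply` attempts above fail because the apiserver is not yet listening, and `retry.go` re-runs them after a short delay. A minimal standalone sketch of that retry-until-success pattern, using a marker file as a stand-in for the apiserver coming up rather than minikube's real kubectl invocation:)

	```shell
	# Re-run a probe with a short delay until it succeeds or attempts run out.
	marker=/tmp/retry-demo-marker
	rm -f "$marker"
	attempts=0
	max=5
	until test -e "$marker"; do
	  attempts=$((attempts + 1))
	  if [ "$attempts" -ge "$max" ]; then
	    echo "giving up after $attempts attempts"
	    break
	  fi
	  # Simulate the apiserver becoming reachable on the 3rd attempt.
	  if [ "$attempts" -eq 3 ]; then
	    touch "$marker"
	  fi
	  sleep 0.2
	done
	echo "succeeded after $attempts attempts"
	rm -f "$marker"
	```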
	I1126 20:22:49.345028  283132 api_server.go:72] duration metric: took 334.915012ms to wait for apiserver process to appear ...
	I1126 20:22:49.345038  283132 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:49.345054  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:49.345834  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:22:49.345938  283132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:22:49.346111  283132 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:22:49.369397  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:22:49.369420  283132 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:22:49.390504  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:22:49.390683  283132 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:22:49.408410  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:22:49.408441  283132 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:22:49.426482  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:22:49.426503  283132 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:22:49.442793  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:49.442870  283132 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:22:49.454437  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:49.461179  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:49.563685  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:49.845496  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1126 20:22:48.197694  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:22:50.201188  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	I1126 20:22:51.277974  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 20:22:51.278018  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 20:22:51.278039  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:51.287748  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 20:22:51.287777  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 20:22:51.345992  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:51.353164  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:51.353197  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:51.403236  283132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.948765551s)
	I1126 20:22:51.845876  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:51.854352  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:51.854381  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
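	(Editor's note: the verbose healthz body above lists one check per line, `[+]` for passing and `[-]` for failing hooks. When triaging such a dump, the failing checks can be filtered out with a simple grep; the inline sample body below is a trimmed stand-in for the full response:)

	```shell
	# Extract only the failing ('[-]') checks from a verbose healthz body.
	healthz='[+]ping ok
	[+]etcd ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld'
	printf '%s\n' "$healthz" | grep '^\[-\]'
	```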
	I1126 20:22:51.937991  283132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.476761053s)
	I1126 20:22:51.940235  283132 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-297942 addons enable metrics-server
	
	I1126 20:22:52.048989  283132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.485263917s)
	I1126 20:22:52.050773  283132 out.go:179] * Enabled addons: default-storageclass, dashboard, storage-provisioner
	I1126 20:22:47.665529  281230 addons.go:530] duration metric: took 2.924403622s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1126 20:22:48.147073  281230 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1126 20:22:48.153314  281230 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1126 20:22:48.154522  281230 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:48.154551  281230 api_server.go:131] duration metric: took 507.601137ms to wait for apiserver health ...
	I1126 20:22:48.154562  281230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:48.159761  281230 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:48.159808  281230 system_pods.go:61] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:48.159819  281230 system_pods.go:61] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:48.159827  281230 system_pods.go:61] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:48.159836  281230 system_pods.go:61] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:48.159858  281230 system_pods.go:61] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:48.159867  281230 system_pods.go:61] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:48.159875  281230 system_pods.go:61] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:48.159880  281230 system_pods.go:61] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Running
	I1126 20:22:48.159888  281230 system_pods.go:74] duration metric: took 5.318838ms to wait for pod list to return data ...
	I1126 20:22:48.159896  281230 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:48.163237  281230 default_sa.go:45] found service account: "default"
	I1126 20:22:48.163425  281230 default_sa.go:55] duration metric: took 3.520246ms for default service account to be created ...
	I1126 20:22:48.163453  281230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:48.167512  281230 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:48.168002  281230 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:48.168069  281230 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:48.168093  281230 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:48.168114  281230 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:48.168149  281230 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:48.168176  281230 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:48.168197  281230 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:48.168213  281230 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Running
	I1126 20:22:48.168233  281230 system_pods.go:126] duration metric: took 4.719858ms to wait for k8s-apps to be running ...
	I1126 20:22:48.168284  281230 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:48.168353  281230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:48.189258  281230 system_svc.go:56] duration metric: took 20.967364ms WaitForService to wait for kubelet
	I1126 20:22:48.189288  281230 kubeadm.go:587] duration metric: took 3.448403882s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:48.189311  281230 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:48.194077  281230 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:48.194116  281230 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:48.194135  281230 node_conditions.go:105] duration metric: took 4.818329ms to run NodePressure ...
	I1126 20:22:48.194150  281230 start.go:242] waiting for startup goroutines ...
	I1126 20:22:48.194164  281230 start.go:247] waiting for cluster config update ...
	I1126 20:22:48.194178  281230 start.go:256] writing updated cluster config ...
	I1126 20:22:48.194454  281230 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:48.199326  281230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:48.204363  281230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:22:50.231611  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:22:52.051919  283132 addons.go:530] duration metric: took 3.041532347s for enable addons: enabled=[default-storageclass dashboard storage-provisioner]
	I1126 20:22:52.345587  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:52.350543  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:52.350570  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:52.846025  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:52.851313  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1126 20:22:52.852557  283132 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:52.852582  283132 api_server.go:131] duration metric: took 3.507536375s to wait for apiserver health ...
	I1126 20:22:52.852593  283132 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:52.856745  283132 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:52.856818  283132 system_pods.go:61] "coredns-66bc5c9577-bnszr" [ddf077eb-a9c4-42f2-a9b7-0aced551aa38] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:22:52.856864  283132 system_pods.go:61] "etcd-newest-cni-297942" [6520dcdd-9b71-4c83-8e54-7421dd7034af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:52.856881  283132 system_pods.go:61] "kindnet-wlhp7" [a6a459a7-87d9-4628-ad09-7e6e8d8445da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:52.856908  283132 system_pods.go:61] "kube-apiserver-newest-cni-297942" [7c910df8-6020-46fb-a380-09a0698b3720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:52.856922  283132 system_pods.go:61] "kube-controller-manager-newest-cni-297942" [66f96670-85f0-47d1-859b-4844b80909d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:52.856931  283132 system_pods.go:61] "kube-proxy-lx6vw" [6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:52.856939  283132 system_pods.go:61] "kube-scheduler-newest-cni-297942" [4d59e692-80ac-4baa-9316-d8930f423531] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:52.856947  283132 system_pods.go:61] "storage-provisioner" [815d8b30-f9a4-4565-9f15-f45940446bd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:22:52.856955  283132 system_pods.go:74] duration metric: took 4.355286ms to wait for pod list to return data ...
	I1126 20:22:52.856965  283132 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:52.859730  283132 default_sa.go:45] found service account: "default"
	I1126 20:22:52.859762  283132 default_sa.go:55] duration metric: took 2.779407ms for default service account to be created ...
	I1126 20:22:52.859775  283132 kubeadm.go:587] duration metric: took 3.849662669s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:22:52.859793  283132 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:52.862559  283132 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:52.862585  283132 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:52.862603  283132 node_conditions.go:105] duration metric: took 2.80479ms to run NodePressure ...
	I1126 20:22:52.862617  283132 start.go:242] waiting for startup goroutines ...
	I1126 20:22:52.862626  283132 start.go:247] waiting for cluster config update ...
	I1126 20:22:52.862639  283132 start.go:256] writing updated cluster config ...
	I1126 20:22:52.863068  283132 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:52.938360  283132 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:22:52.940104  283132 out.go:179] * Done! kubectl is now configured to use "newest-cni-297942" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.467486438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.473068167Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=42980c7b-bee7-482e-95b7-ef2927b19ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.473969937Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=deb7e6c2-9d46-4f41-9e28-4820819a4a06 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.47585817Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.476580163Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.476734369Z" level=info msg="Ran pod sandbox 5801de75823a0e6217a434dce6ba4077162377b24ceb13370b3d512ac33700dc with infra container: kube-system/kindnet-wlhp7/POD" id=deb7e6c2-9d46-4f41-9e28-4820819a4a06 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.477497012Z" level=info msg="Ran pod sandbox a3a69b0c4857c0198f6c9f17b7f83b9ff78520fc6a633e61883c9473ad0a96bd with infra container: kube-system/kube-proxy-lx6vw/POD" id=42980c7b-bee7-482e-95b7-ef2927b19ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.478198147Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=52293e5d-09a9-42c2-914c-c253017c66b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.478403343Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f02276b5-1a2d-42ac-88a3-e9d3bd76fb5c name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.479654378Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5ce84deb-7532-4eb8-bd61-cad169f6dc3a name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.480221628Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=be227d1c-8839-4482-be29-31307e285e15 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.480967306Z" level=info msg="Creating container: kube-system/kindnet-wlhp7/kindnet-cni" id=e82dd60f-f707-41f8-93a8-d271adb935ef name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.481058468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.481255877Z" level=info msg="Creating container: kube-system/kube-proxy-lx6vw/kube-proxy" id=a4957c48-3910-4123-9c9f-593fdbb5b8e6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.481361685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.486448795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.487260579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.48966009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.490349305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.514407584Z" level=info msg="Created container 0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874: kube-system/kindnet-wlhp7/kindnet-cni" id=e82dd60f-f707-41f8-93a8-d271adb935ef name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.515176503Z" level=info msg="Starting container: 0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874" id=17d731e5-b509-41d9-824d-70c371b85119 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.517591988Z" level=info msg="Started container" PID=1049 containerID=0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874 description=kube-system/kindnet-wlhp7/kindnet-cni id=17d731e5-b509-41d9-824d-70c371b85119 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5801de75823a0e6217a434dce6ba4077162377b24ceb13370b3d512ac33700dc
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.521902042Z" level=info msg="Created container 79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c: kube-system/kube-proxy-lx6vw/kube-proxy" id=a4957c48-3910-4123-9c9f-593fdbb5b8e6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.522822809Z" level=info msg="Starting container: 79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c" id=3cb4b460-83e1-446f-b888-bb24002e2b29 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.526245991Z" level=info msg="Started container" PID=1050 containerID=79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c description=kube-system/kube-proxy-lx6vw/kube-proxy id=3cb4b460-83e1-446f-b888-bb24002e2b29 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3a69b0c4857c0198f6c9f17b7f83b9ff78520fc6a633e61883c9473ad0a96bd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	79a3e043f4e71       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   a3a69b0c4857c       kube-proxy-lx6vw                            kube-system
	0bde763b33e49       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   5801de75823a0       kindnet-wlhp7                               kube-system
	0b56ca8f427ee       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   b883884666259       etcd-newest-cni-297942                      kube-system
	c6b80fb157b18       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   3a058b020d993       kube-apiserver-newest-cni-297942            kube-system
	db4659bb85541       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   2ee8599e03cda       kube-scheduler-newest-cni-297942            kube-system
	cc32d8959eb06       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   416c19c6efc84       kube-controller-manager-newest-cni-297942   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-297942
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-297942
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=newest-cni-297942
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_22_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:22:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-297942
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:22:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:22:51 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:22:51 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:22:51 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 26 Nov 2025 20:22:51 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-297942
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                8cbd9667-abfd-484d-8f07-0a0070bb411f
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-297942                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-wlhp7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-297942             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-297942    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-lx6vw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-297942             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21s   kube-proxy       
	  Normal  Starting                 4s    kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node newest-cni-297942 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node newest-cni-297942 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node newest-cni-297942 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s   node-controller  Node newest-cni-297942 event: Registered Node newest-cni-297942 in Controller
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-297942 event: Registered Node newest-cni-297942 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [0b56ca8f427ee8a88caaf56897449390deb07473ded1db55b4a4435b4a244998] <==
	{"level":"warn","ts":"2025-11-26T20:22:50.401180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.409484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.419540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.432103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.444199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.454651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.462046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.471417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.489665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.498971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.506892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.515617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.524331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.531857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.540806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.549096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.565279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.573849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.581924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.590550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.598973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.612530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.622250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.630887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.720582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:22:57 up  1:05,  0 user,  load average: 4.65, 3.35, 2.15
	Linux newest-cni-297942 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874] <==
	I1126 20:22:52.767174       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:22:52.767412       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1126 20:22:52.767522       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:22:52.767541       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:22:52.767551       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:22:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:22:52.970805       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:22:52.970833       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:22:52.970843       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:22:52.971186       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:22:53.370920       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:22:53.370952       1 metrics.go:72] Registering metrics
	I1126 20:22:53.371001       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [c6b80fb157b1876098abb174101b5cbb41aab43e97ca6bbab5097ef0385c3b13] <==
	I1126 20:22:51.351944       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:22:51.351993       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:22:51.352013       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:22:51.353782       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:22:51.356339       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:22:51.358415       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:22:51.359210       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:22:51.359223       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:22:51.359229       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:22:51.359236       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:22:51.353795       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1126 20:22:51.353819       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:22:51.409942       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:22:51.411489       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:22:51.773359       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:22:51.811350       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:22:51.840125       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:22:51.853764       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:22:51.862679       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:22:51.916260       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.138.90"}
	I1126 20:22:51.932531       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.79.147"}
	I1126 20:22:52.258163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:22:55.050561       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:22:55.098642       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:22:55.250052       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cc32d8959eb0604113131904bc81d9b470aeaecc3fe9ac30a6213641dcb226f1] <==
	I1126 20:22:54.745950       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:22:54.745986       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:22:54.746122       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:22:54.746148       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:22:54.746357       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-297942"
	I1126 20:22:54.746392       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:22:54.746426       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1126 20:22:54.746587       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 20:22:54.747538       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:22:54.747630       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:22:54.748694       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:22:54.750669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:54.751736       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:22:54.751767       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 20:22:54.751774       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:22:54.751825       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:22:54.751836       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:22:54.751842       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:22:54.756996       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:54.761289       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:54.763342       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:22:54.764483       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:22:54.769671       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:22:54.772932       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:22:54.777570       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c] <==
	I1126 20:22:52.571934       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:22:52.627564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:22:52.728663       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:22:52.728702       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1126 20:22:52.728806       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:22:52.746982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:22:52.747041       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:22:52.753213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:22:52.753765       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:22:52.753853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:52.757671       1 config.go:200] "Starting service config controller"
	I1126 20:22:52.757741       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:22:52.757699       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:22:52.757801       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:22:52.757714       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:22:52.757867       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:22:52.757875       1 config.go:309] "Starting node config controller"
	I1126 20:22:52.757944       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:22:52.757968       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:22:52.857889       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:22:52.857921       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:22:52.857889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [db4659bb8554145bfacd475b919e39a9767e55ff868938d2d744b53d7838507e] <==
	I1126 20:22:50.383534       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:22:51.277120       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:22:51.277158       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:22:51.277170       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:22:51.277181       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:22:51.347574       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:22:51.347622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:51.351700       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:51.351785       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:51.353020       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:22:51.353113       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:22:51.452001       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: E1126 20:22:51.972912     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-297942\" already exists" pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: I1126 20:22:51.972949     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: E1126 20:22:51.980296     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-297942\" already exists" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: I1126 20:22:51.980332     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: E1126 20:22:51.989903     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-297942\" already exists" pod="kube-system/kube-scheduler-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: I1126 20:22:51.990531     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: E1126 20:22:51.997013     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-297942\" already exists" pod="kube-system/etcd-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.157123     664 apiserver.go:52] "Watching apiserver"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.162071     664 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202607     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6a459a7-87d9-4628-ad09-7e6e8d8445da-xtables-lock\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202686     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c-xtables-lock\") pod \"kube-proxy-lx6vw\" (UID: \"6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c\") " pod="kube-system/kube-proxy-lx6vw"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202744     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c-lib-modules\") pod \"kube-proxy-lx6vw\" (UID: \"6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c\") " pod="kube-system/kube-proxy-lx6vw"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202786     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a6a459a7-87d9-4628-ad09-7e6e8d8445da-cni-cfg\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202807     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6a459a7-87d9-4628-ad09-7e6e8d8445da-lib-modules\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.243048     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.243432     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.243759     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.244223     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: E1126 20:22:52.258984     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-297942\" already exists" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: E1126 20:22:52.259591     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-297942\" already exists" pod="kube-system/kube-scheduler-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: E1126 20:22:52.259897     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-297942\" already exists" pod="kube-system/etcd-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: E1126 20:22:52.260077     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-297942\" already exists" pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:54 newest-cni-297942 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:22:54 newest-cni-297942 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:22:54 newest-cni-297942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-297942 -n newest-cni-297942
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-297942 -n newest-cni-297942: exit status 2 (343.133445ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-297942 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bnszr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hw5ql kubernetes-dashboard-855c9754f9-92dlp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-297942 describe pod coredns-66bc5c9577-bnszr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hw5ql kubernetes-dashboard-855c9754f9-92dlp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-297942 describe pod coredns-66bc5c9577-bnszr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hw5ql kubernetes-dashboard-855c9754f9-92dlp: exit status 1 (61.599624ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bnszr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-hw5ql" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-92dlp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-297942 describe pod coredns-66bc5c9577-bnszr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hw5ql kubernetes-dashboard-855c9754f9-92dlp: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-297942
helpers_test.go:243: (dbg) docker inspect newest-cni-297942:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584",
	        "Created": "2025-11-26T20:22:12.162948812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:22:41.158451192Z",
	            "FinishedAt": "2025-11-26T20:22:40.172025239Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/hostname",
	        "HostsPath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/hosts",
	        "LogPath": "/var/lib/docker/containers/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584/40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584-json.log",
	        "Name": "/newest-cni-297942",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-297942:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-297942",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40b9f3c5f1a3b0585255460117f18a9671b74c031d814a5253a12e48d3850584",
	                "LowerDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/417273ddd8a42fbe19a864fe35ffc1bc9de0f153b520af6708ccf3f376e863fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-297942",
	                "Source": "/var/lib/docker/volumes/newest-cni-297942/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-297942",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-297942",
	                "name.minikube.sigs.k8s.io": "newest-cni-297942",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2f35fe2a7e03d4ae53c80197232da4df6f428bcc28c758f21a90153fda12f531",
	            "SandboxKey": "/var/run/docker/netns/2f35fe2a7e03",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-297942": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a8acc179efb582b4f8ab1f8758542f842892d2dd2928aade1bbb97827e2c1af",
	                    "EndpointID": "1d80a818a66d5158851c96bfc37538fcb57e5fb123d59c70cd4517f824513591",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "aa:03:05:b0:ae:f3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-297942",
	                        "40b9f3c5f1a3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297942 -n newest-cni-297942
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297942 -n newest-cni-297942: exit status 2 (330.86359ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-297942 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-expiration-571738                                                                                                                                                                                                                     │ cert-expiration-571738       │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:21 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │                     │
	│ start   │ -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:21 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p stopped-upgrade-211103                                                                                                                                                                                                                     │ stopped-upgrade-211103       │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p kubernetes-upgrade-225144                                                                                                                                                                                                                  │ kubernetes-upgrade-225144    │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ delete  │ -p disable-driver-mounts-221304                                                                                                                                                                                                               │ disable-driver-mounts-221304 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p no-preload-026579 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p embed-certs-949294 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p no-preload-026579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-949294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p newest-cni-297942 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-297942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ image   │ newest-cni-297942 image list --format=json                                                                                                                                                                                                    │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ pause   │ -p newest-cni-297942 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-178152 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:22:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:22:40.884483  283132 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:22:40.884766  283132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:40.884776  283132 out.go:374] Setting ErrFile to fd 2...
	I1126 20:22:40.884785  283132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:22:40.884987  283132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:22:40.885440  283132 out.go:368] Setting JSON to false
	I1126 20:22:40.886566  283132 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3911,"bootTime":1764184650,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:22:40.886632  283132 start.go:143] virtualization: kvm guest
	I1126 20:22:40.888379  283132 out.go:179] * [newest-cni-297942] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:22:40.889473  283132 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:22:40.889503  283132 notify.go:221] Checking for updates...
	I1126 20:22:40.892833  283132 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:22:40.894800  283132 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:40.896376  283132 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:22:40.897743  283132 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:22:40.898713  283132 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:22:40.900231  283132 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:40.900958  283132 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:22:40.928114  283132 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:22:40.928202  283132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:41.015656  283132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:22:41.003539781 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:41.015804  283132 docker.go:319] overlay module found
	I1126 20:22:41.016948  283132 out.go:179] * Using the docker driver based on existing profile
	I1126 20:22:41.017883  283132 start.go:309] selected driver: docker
	I1126 20:22:41.017898  283132 start.go:927] validating driver "docker" against &{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:41.018002  283132 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:22:41.018724  283132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:22:41.084121  283132 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:22:41.072667777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:22:41.084507  283132 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:22:41.084546  283132 cni.go:84] Creating CNI manager for ""
	I1126 20:22:41.084623  283132 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:41.084677  283132 start.go:353] cluster config:
	{Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:41.086652  283132 out.go:179] * Starting "newest-cni-297942" primary control-plane node in "newest-cni-297942" cluster
	I1126 20:22:41.087583  283132 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:22:41.088592  283132 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:22:41.089520  283132 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:41.089554  283132 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:22:41.089569  283132 cache.go:65] Caching tarball of preloaded images
	I1126 20:22:41.089623  283132 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:22:41.089678  283132 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:22:41.089692  283132 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:22:41.089796  283132 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json ...
	I1126 20:22:41.111178  283132 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:22:41.111197  283132 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:22:41.111211  283132 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:22:41.111242  283132 start.go:360] acquireMachinesLock for newest-cni-297942: {Name:mkec4aea2213ece57272965b7ad56143d17ef93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:22:41.111305  283132 start.go:364] duration metric: took 40.156µs to acquireMachinesLock for "newest-cni-297942"
	I1126 20:22:41.111323  283132 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:22:41.111333  283132 fix.go:54] fixHost starting: 
	I1126 20:22:41.111591  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:41.129559  283132 fix.go:112] recreateIfNeeded on newest-cni-297942: state=Stopped err=<nil>
	W1126 20:22:41.129580  283132 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:22:39.153389  279050 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:39.153408  279050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:39.153478  279050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-026579
	I1126 20:22:39.179591  279050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:22:39.180647  279050 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:39.180665  279050 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:39.180721  279050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-026579
	I1126 20:22:39.186501  279050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:22:39.209307  279050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:22:39.273960  279050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:39.287389  279050 node_ready.go:35] waiting up to 6m0s for node "no-preload-026579" to be "Ready" ...
	I1126 20:22:39.298799  279050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:39.300737  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:22:39.300753  279050 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:22:39.315410  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:22:39.315430  279050 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:22:39.325338  279050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:39.331515  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:22:39.331534  279050 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:22:39.348174  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:22:39.348194  279050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:22:39.369916  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:22:39.369951  279050 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:22:39.385646  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:22:39.385669  279050 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:22:39.400976  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:22:39.401000  279050 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:22:39.416714  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:22:39.416732  279050 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:22:39.433038  279050 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:39.433061  279050 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:22:39.447449  279050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:40.587082  279050 node_ready.go:49] node "no-preload-026579" is "Ready"
	I1126 20:22:40.587113  279050 node_ready.go:38] duration metric: took 1.299680318s for node "no-preload-026579" to be "Ready" ...
	I1126 20:22:40.587129  279050 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:40.587180  279050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:41.152424  279050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.853595012s)
	I1126 20:22:41.152577  279050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.827200281s)
	I1126 20:22:41.152686  279050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.705180849s)
	I1126 20:22:41.152711  279050 api_server.go:72] duration metric: took 2.029918005s to wait for apiserver process to appear ...
	I1126 20:22:41.152721  279050 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:41.152742  279050 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:22:41.156567  279050 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-026579 addons enable metrics-server
	
	I1126 20:22:41.157768  279050 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:41.157789  279050 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:41.157819  279050 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1126 20:22:41.159540  279050 addons.go:530] duration metric: took 2.036715336s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1126 20:22:41.653489  279050 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:22:41.658910  279050 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:41.658967  279050 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:37.813684  281230 out.go:252] * Restarting existing docker container for "embed-certs-949294" ...
	I1126 20:22:37.813768  281230 cli_runner.go:164] Run: docker start embed-certs-949294
	I1126 20:22:38.131293  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:38.152794  281230 kic.go:430] container "embed-certs-949294" state is running.
	I1126 20:22:38.153224  281230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-949294
	I1126 20:22:38.175166  281230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/config.json ...
	I1126 20:22:38.175388  281230 machine.go:94] provisionDockerMachine start ...
	I1126 20:22:38.175448  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:38.196588  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:38.196809  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:38.196819  281230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:22:38.197513  281230 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60668->127.0.0.1:33088: read: connection reset by peer
	I1126 20:22:41.353546  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-949294
	
	I1126 20:22:41.353574  281230 ubuntu.go:182] provisioning hostname "embed-certs-949294"
	I1126 20:22:41.353632  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.371710  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.371940  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:41.371965  281230 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-949294 && echo "embed-certs-949294" | sudo tee /etc/hostname
	I1126 20:22:41.527011  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-949294
	
	I1126 20:22:41.527082  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.552128  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.552497  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:41.552529  281230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-949294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-949294/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-949294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:22:41.706552  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:22:41.706582  281230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:22:41.706605  281230 ubuntu.go:190] setting up certificates
	I1126 20:22:41.706617  281230 provision.go:84] configureAuth start
	I1126 20:22:41.706674  281230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-949294
	I1126 20:22:41.731291  281230 provision.go:143] copyHostCerts
	I1126 20:22:41.731358  281230 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:22:41.731373  281230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:22:41.731452  281230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:22:41.731672  281230 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:22:41.731683  281230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:22:41.731717  281230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:22:41.731789  281230 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:22:41.731798  281230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:22:41.731833  281230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:22:41.731947  281230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.embed-certs-949294 san=[127.0.0.1 192.168.94.2 embed-certs-949294 localhost minikube]
	I1126 20:22:41.778215  281230 provision.go:177] copyRemoteCerts
	I1126 20:22:41.778266  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:22:41.778295  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.797553  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:41.908508  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:22:41.927584  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:22:41.944361  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:22:41.960987  281230 provision.go:87] duration metric: took 254.359611ms to configureAuth
	I1126 20:22:41.961014  281230 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:22:41.961161  281230 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:41.961244  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:41.979703  281230 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.980006  281230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:22:41.980032  281230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:22:42.318188  281230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:22:42.318211  281230 machine.go:97] duration metric: took 4.142808387s to provisionDockerMachine
	I1126 20:22:42.318225  281230 start.go:293] postStartSetup for "embed-certs-949294" (driver="docker")
	I1126 20:22:42.318237  281230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:22:42.318297  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:22:42.318364  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.338327  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.438215  281230 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:22:42.441404  281230 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:22:42.441434  281230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:22:42.441446  281230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:22:42.441539  281230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:22:42.441610  281230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:22:42.441700  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:22:42.448842  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:42.465662  281230 start.go:296] duration metric: took 147.425996ms for postStartSetup
	I1126 20:22:42.465729  281230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:22:42.465774  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.483672  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.582571  281230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:22:42.589248  281230 fix.go:56] duration metric: took 4.801612317s for fixHost
	I1126 20:22:42.589282  281230 start.go:83] releasing machines lock for "embed-certs-949294", held for 4.801666542s
	I1126 20:22:42.589356  281230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-949294
	I1126 20:22:42.613599  281230 ssh_runner.go:195] Run: cat /version.json
	I1126 20:22:42.613635  281230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:22:42.613653  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.613694  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:42.640998  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.641470  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:42.742494  281230 ssh_runner.go:195] Run: systemctl --version
	I1126 20:22:42.794845  281230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:22:42.828506  281230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:22:42.833001  281230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:22:42.833081  281230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:22:42.840611  281230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:22:42.840633  281230 start.go:496] detecting cgroup driver to use...
	I1126 20:22:42.840662  281230 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:22:42.840704  281230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:22:42.854304  281230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:22:42.865621  281230 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:22:42.865663  281230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:22:42.879121  281230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:22:42.890217  281230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:22:42.972124  281230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:22:43.054010  281230 docker.go:234] disabling docker service ...
	I1126 20:22:43.054076  281230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:22:43.067236  281230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:22:43.079079  281230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:22:43.158407  281230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:22:43.236403  281230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:22:43.249898  281230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:22:43.266098  281230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:22:43.266169  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.275593  281230 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:22:43.275650  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.286305  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.295428  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.304196  281230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:22:43.312078  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.320105  281230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.328187  281230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:43.336849  281230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:22:43.344213  281230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:22:43.351591  281230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:43.434081  281230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:22:43.584410  281230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:22:43.584499  281230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:22:43.588269  281230 start.go:564] Will wait 60s for crictl version
	I1126 20:22:43.588336  281230 ssh_runner.go:195] Run: which crictl
	I1126 20:22:43.591767  281230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:22:43.614952  281230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:22:43.615025  281230 ssh_runner.go:195] Run: crio --version
	I1126 20:22:43.641356  281230 ssh_runner.go:195] Run: crio --version
	I1126 20:22:43.667903  281230 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1126 20:22:39.746749  271308 node_ready.go:57] node "default-k8s-diff-port-178152" has "Ready":"False" status (will retry)
	I1126 20:22:42.249658  271308 node_ready.go:49] node "default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:42.250190  271308 node_ready.go:38] duration metric: took 11.006799541s for node "default-k8s-diff-port-178152" to be "Ready" ...
	I1126 20:22:42.250224  271308 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:42.250294  271308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:42.272956  271308 api_server.go:72] duration metric: took 11.381347219s to wait for apiserver process to appear ...
	I1126 20:22:42.272984  271308 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:42.273006  271308 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:22:42.279175  271308 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1126 20:22:42.280247  271308 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:42.280267  271308 api_server.go:131] duration metric: took 7.276294ms to wait for apiserver health ...
	I1126 20:22:42.280275  271308 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:42.283222  271308 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:42.283253  271308 system_pods.go:61] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.283261  271308 system_pods.go:61] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.283266  271308 system_pods.go:61] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.283269  271308 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.283273  271308 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.283280  271308 system_pods.go:61] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.283283  271308 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.283288  271308 system_pods.go:61] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.283293  271308 system_pods.go:74] duration metric: took 3.013459ms to wait for pod list to return data ...
	I1126 20:22:42.283303  271308 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:42.285341  271308 default_sa.go:45] found service account: "default"
	I1126 20:22:42.285361  271308 default_sa.go:55] duration metric: took 2.052746ms for default service account to be created ...
	I1126 20:22:42.285368  271308 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:42.287817  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.287844  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.287851  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.287871  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.287878  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.287906  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.287912  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.287918  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.287927  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.287958  271308 retry.go:31] will retry after 308.61666ms: missing components: kube-dns
	I1126 20:22:42.602933  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.602960  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.602966  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.602971  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.602975  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.602979  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.602982  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.602985  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.602989  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.603002  271308 retry.go:31] will retry after 352.870646ms: missing components: kube-dns
	I1126 20:22:42.960487  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.960513  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.960519  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:42.960525  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:42.960532  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:42.960536  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:42.960545  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:42.960550  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:42.960554  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.960567  271308 retry.go:31] will retry after 370.669224ms: missing components: kube-dns
	I1126 20:22:43.336323  271308 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:43.336368  271308 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Running
	I1126 20:22:43.336377  271308 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running
	I1126 20:22:43.336384  271308 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running
	I1126 20:22:43.336390  271308 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running
	I1126 20:22:43.336401  271308 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running
	I1126 20:22:43.336406  271308 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running
	I1126 20:22:43.336412  271308 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running
	I1126 20:22:43.336420  271308 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Running
	I1126 20:22:43.336429  271308 system_pods.go:126] duration metric: took 1.051054713s to wait for k8s-apps to be running ...
	I1126 20:22:43.336442  271308 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:43.336492  271308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:43.349377  271308 system_svc.go:56] duration metric: took 12.93002ms WaitForService to wait for kubelet
	I1126 20:22:43.349397  271308 kubeadm.go:587] duration metric: took 12.457793394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:43.349410  271308 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:43.352231  271308 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:43.352254  271308 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:43.352270  271308 node_conditions.go:105] duration metric: took 2.855748ms to run NodePressure ...
	I1126 20:22:43.352281  271308 start.go:242] waiting for startup goroutines ...
	I1126 20:22:43.352290  271308 start.go:247] waiting for cluster config update ...
	I1126 20:22:43.352299  271308 start.go:256] writing updated cluster config ...
	I1126 20:22:43.352549  271308 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:43.356029  271308 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:43.359306  271308 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.363412  271308 pod_ready.go:94] pod "coredns-66bc5c9577-tpmmm" is "Ready"
	I1126 20:22:43.363435  271308 pod_ready.go:86] duration metric: took 4.112055ms for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.365248  271308 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.368843  271308 pod_ready.go:94] pod "etcd-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:43.368862  271308 pod_ready.go:86] duration metric: took 3.598035ms for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.370559  271308 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.373917  271308 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:43.373937  271308 pod_ready.go:86] duration metric: took 3.359149ms for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.375639  271308 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.760756  271308 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:43.760788  271308 pod_ready.go:86] duration metric: took 385.124259ms for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.960061  271308 pod_ready.go:83] waiting for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:44.359897  271308 pod_ready.go:94] pod "kube-proxy-vd7fp" is "Ready"
	I1126 20:22:44.359924  271308 pod_ready.go:86] duration metric: took 399.838276ms for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:44.560435  271308 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:43.668973  281230 cli_runner.go:164] Run: docker network inspect embed-certs-949294 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:43.686898  281230 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1126 20:22:43.690943  281230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:43.701122  281230 kubeadm.go:884] updating cluster {Name:embed-certs-949294 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-949294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:22:43.701233  281230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:43.701286  281230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:43.733576  281230 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:43.733598  281230 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:22:43.733638  281230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:43.757784  281230 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:43.757801  281230 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:22:43.757809  281230 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1126 20:22:43.757903  281230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-949294 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-949294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:22:43.757958  281230 ssh_runner.go:195] Run: crio config
	I1126 20:22:43.801014  281230 cni.go:84] Creating CNI manager for ""
	I1126 20:22:43.801042  281230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:43.801062  281230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:22:43.801091  281230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-949294 NodeName:embed-certs-949294 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:22:43.801281  281230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-949294"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:22:43.801354  281230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:22:43.809139  281230 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:22:43.809185  281230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:22:43.816443  281230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:22:43.828647  281230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:22:43.842109  281230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1126 20:22:43.853618  281230 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:22:43.856940  281230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:43.866100  281230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:43.974877  281230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:43.999946  281230 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294 for IP: 192.168.94.2
	I1126 20:22:43.999968  281230 certs.go:195] generating shared ca certs ...
	I1126 20:22:43.999990  281230 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.000162  281230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:22:44.000228  281230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:22:44.000242  281230 certs.go:257] generating profile certs ...
	I1126 20:22:44.000348  281230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/client.key
	I1126 20:22:44.000422  281230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/apiserver.key.5bee8ac0
	I1126 20:22:44.000502  281230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/proxy-client.key
	I1126 20:22:44.000653  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:22:44.000697  281230 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:22:44.000711  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:22:44.000754  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:22:44.000799  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:22:44.000834  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:22:44.000897  281230 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:44.001493  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:22:44.019892  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:22:44.040066  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:22:44.057726  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:22:44.081328  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1126 20:22:44.098058  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:22:44.113934  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:22:44.129831  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/embed-certs-949294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:22:44.145588  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:22:44.161958  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:22:44.178404  281230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:22:44.195831  281230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:22:44.207337  281230 ssh_runner.go:195] Run: openssl version
	I1126 20:22:44.213097  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:22:44.220687  281230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:22:44.224116  281230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:22:44.224164  281230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:22:44.258977  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:22:44.267014  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:22:44.275688  281230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:44.279299  281230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:44.279349  281230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:44.314548  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:22:44.322323  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:22:44.331309  281230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:22:44.334747  281230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:22:44.334792  281230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:22:44.369194  281230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:22:44.377304  281230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:22:44.381220  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:22:44.417889  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:22:44.454503  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:22:44.491150  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:22:44.542762  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:22:44.589987  281230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:22:44.653209  281230 kubeadm.go:401] StartCluster: {Name:embed-certs-949294 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-949294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:44.653317  281230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:22:44.653402  281230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:22:44.698166  281230 cri.go:89] found id: "d0edb51e9e5fccff0dc7134b628a507303eb3f0ea693960b2ef07c819ccfcfb3"
	I1126 20:22:44.698189  281230 cri.go:89] found id: "1247796ab1281fb5ee5525e146ff9c03a80b5e36662073b88f7b4335b21630fa"
	I1126 20:22:44.698194  281230 cri.go:89] found id: "f265ea81a09610c82177779173f228ed110d405daaad945e9224abda5afc655e"
	I1126 20:22:44.698199  281230 cri.go:89] found id: "27f1868f6da521ee9bf27fc5eccba3b561b07052f526f759dfbebcc382b682e0"
	I1126 20:22:44.698204  281230 cri.go:89] found id: ""
	I1126 20:22:44.698249  281230 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:22:44.712857  281230 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:22:44Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:22:44.712953  281230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:22:44.721110  281230 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:22:44.721122  281230 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:22:44.721219  281230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:22:44.728115  281230 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:22:44.728769  281230 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-949294" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:44.729067  281230 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-949294" cluster setting kubeconfig missing "embed-certs-949294" context setting]
	I1126 20:22:44.729783  281230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.731342  281230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:22:44.739276  281230 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1126 20:22:44.739300  281230 kubeadm.go:602] duration metric: took 18.174206ms to restartPrimaryControlPlane
	I1126 20:22:44.739307  281230 kubeadm.go:403] duration metric: took 86.108546ms to StartCluster
	I1126 20:22:44.739318  281230 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.739377  281230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:44.740675  281230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:44.740856  281230 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:22:44.741084  281230 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:44.741124  281230 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:22:44.741179  281230 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-949294"
	I1126 20:22:44.741193  281230 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-949294"
	W1126 20:22:44.741198  281230 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:22:44.741214  281230 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:22:44.741554  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.741639  281230 addons.go:70] Setting dashboard=true in profile "embed-certs-949294"
	I1126 20:22:44.741651  281230 addons.go:70] Setting default-storageclass=true in profile "embed-certs-949294"
	I1126 20:22:44.741668  281230 addons.go:239] Setting addon dashboard=true in "embed-certs-949294"
	I1126 20:22:44.741669  281230 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-949294"
	W1126 20:22:44.741678  281230 addons.go:248] addon dashboard should already be in state true
	I1126 20:22:44.741729  281230 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:22:44.741928  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.742228  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.742329  281230 out.go:179] * Verifying Kubernetes components...
	I1126 20:22:44.745728  281230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:44.769720  281230 addons.go:239] Setting addon default-storageclass=true in "embed-certs-949294"
	W1126 20:22:44.769745  281230 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:22:44.769776  281230 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:22:44.770229  281230 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:22:44.770534  281230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:22:44.771603  281230 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:22:44.771655  281230 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:44.771665  281230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:44.771726  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:44.773363  281230 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:22:44.961735  271308 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-178152" is "Ready"
	I1126 20:22:44.961781  271308 pod_ready.go:86] duration metric: took 401.291943ms for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:22:44.961797  271308 pod_ready.go:40] duration metric: took 1.605738411s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:45.024642  271308 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:22:45.028340  271308 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-178152" cluster and "default" namespace by default
	I1126 20:22:41.130916  283132 out.go:252] * Restarting existing docker container for "newest-cni-297942" ...
	I1126 20:22:41.130973  283132 cli_runner.go:164] Run: docker start newest-cni-297942
	I1126 20:22:41.417598  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:41.436343  283132 kic.go:430] container "newest-cni-297942" state is running.
	I1126 20:22:41.436757  283132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-297942
	I1126 20:22:41.454760  283132 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/config.json ...
	I1126 20:22:41.454963  283132 machine.go:94] provisionDockerMachine start ...
	I1126 20:22:41.455014  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:41.473682  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:41.473897  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:41.473908  283132 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:22:41.474510  283132 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53014->127.0.0.1:33093: read: connection reset by peer
	I1126 20:22:44.628083  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-297942
	
	I1126 20:22:44.628112  283132 ubuntu.go:182] provisioning hostname "newest-cni-297942"
	I1126 20:22:44.628888  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:44.654951  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:44.655280  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:44.655300  283132 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-297942 && echo "newest-cni-297942" | sudo tee /etc/hostname
	I1126 20:22:44.836325  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-297942
	
	I1126 20:22:44.836408  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:44.860919  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:44.861149  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:44.861181  283132 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-297942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-297942/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-297942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:22:45.024750  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:22:45.024885  283132 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:22:45.024931  283132 ubuntu.go:190] setting up certificates
	I1126 20:22:45.025025  283132 provision.go:84] configureAuth start
	I1126 20:22:45.025434  283132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-297942
	I1126 20:22:45.053790  283132 provision.go:143] copyHostCerts
	I1126 20:22:45.054123  283132 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:22:45.054181  283132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:22:45.054621  283132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:22:45.054815  283132 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:22:45.054941  283132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:22:45.056077  283132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:22:45.056254  283132 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:22:45.056282  283132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:22:45.056373  283132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:22:45.056499  283132 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.newest-cni-297942 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-297942]
	I1126 20:22:45.148820  283132 provision.go:177] copyRemoteCerts
	I1126 20:22:45.148880  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:22:45.148938  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.175942  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:45.287084  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:22:45.308935  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:22:45.325992  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:22:45.342613  283132 provision.go:87] duration metric: took 317.575317ms to configureAuth
	I1126 20:22:45.342637  283132 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:22:45.342828  283132 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:45.342955  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.362599  283132 main.go:143] libmachine: Using SSH client type: native
	I1126 20:22:45.362913  283132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1126 20:22:45.362936  283132 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:22:45.681202  283132 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:22:45.681227  283132 machine.go:97] duration metric: took 4.226250286s to provisionDockerMachine
	I1126 20:22:45.681240  283132 start.go:293] postStartSetup for "newest-cni-297942" (driver="docker")
	I1126 20:22:45.681252  283132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:22:45.681306  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:22:45.681356  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.705211  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:45.819521  283132 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:22:45.823878  283132 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:22:45.823902  283132 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:22:45.823911  283132 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:22:45.823957  283132 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:22:45.824019  283132 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:22:45.824103  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:22:45.832396  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:45.855936  283132 start.go:296] duration metric: took 174.682288ms for postStartSetup
	I1126 20:22:45.856010  283132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:22:45.856070  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:45.877896  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:42.153037  279050 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:22:42.157427  279050 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:22:42.158369  279050 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:42.158392  279050 api_server.go:131] duration metric: took 1.005661792s to wait for apiserver health ...
	I1126 20:22:42.158401  279050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:42.161910  279050 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:42.161934  279050 system_pods.go:61] "coredns-66bc5c9577-wl4xp" [e1cf9739-1b9a-44d7-a932-447ac94e142d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.161942  279050 system_pods.go:61] "etcd-no-preload-026579" [cce16df8-4867-47fc-acec-5b1651799367] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:42.161952  279050 system_pods.go:61] "kindnet-8rfpj" [09d32618-edb6-49f0-b9ce-af0f0751b53f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:42.161968  279050 system_pods.go:61] "kube-apiserver-no-preload-026579" [4119730b-bf6c-4d48-9883-13ca12edeabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:42.161984  279050 system_pods.go:61] "kube-controller-manager-no-preload-026579" [c583b53a-df53-4d77-995f-c6c7d1189418] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:42.161995  279050 system_pods.go:61] "kube-proxy-ktbwp" [93566a91-b6dc-47fa-9d46-9ebf0fc4704a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:42.162008  279050 system_pods.go:61] "kube-scheduler-no-preload-026579" [0ae4f2d1-175e-45c0-bbb2-eedf826e6fb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:42.162015  279050 system_pods.go:61] "storage-provisioner" [e2f8aa92-297b-4be4-a3a2-45a956763aad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.162021  279050 system_pods.go:74] duration metric: took 3.614709ms to wait for pod list to return data ...
	I1126 20:22:42.162029  279050 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:42.164140  279050 default_sa.go:45] found service account: "default"
	I1126 20:22:42.164157  279050 default_sa.go:55] duration metric: took 2.123726ms for default service account to be created ...
	I1126 20:22:42.164165  279050 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:42.166895  279050 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:42.166923  279050 system_pods.go:89] "coredns-66bc5c9577-wl4xp" [e1cf9739-1b9a-44d7-a932-447ac94e142d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:42.166933  279050 system_pods.go:89] "etcd-no-preload-026579" [cce16df8-4867-47fc-acec-5b1651799367] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:42.166942  279050 system_pods.go:89] "kindnet-8rfpj" [09d32618-edb6-49f0-b9ce-af0f0751b53f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:42.166955  279050 system_pods.go:89] "kube-apiserver-no-preload-026579" [4119730b-bf6c-4d48-9883-13ca12edeabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:42.166963  279050 system_pods.go:89] "kube-controller-manager-no-preload-026579" [c583b53a-df53-4d77-995f-c6c7d1189418] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:42.166986  279050 system_pods.go:89] "kube-proxy-ktbwp" [93566a91-b6dc-47fa-9d46-9ebf0fc4704a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:42.167013  279050 system_pods.go:89] "kube-scheduler-no-preload-026579" [0ae4f2d1-175e-45c0-bbb2-eedf826e6fb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:42.167025  279050 system_pods.go:89] "storage-provisioner" [e2f8aa92-297b-4be4-a3a2-45a956763aad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:22:42.167036  279050 system_pods.go:126] duration metric: took 2.86619ms to wait for k8s-apps to be running ...
	I1126 20:22:42.167048  279050 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:42.167096  279050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:42.179063  279050 system_svc.go:56] duration metric: took 12.010286ms WaitForService to wait for kubelet
	I1126 20:22:42.179086  279050 kubeadm.go:587] duration metric: took 3.056293076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:42.179104  279050 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:42.181486  279050 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:42.181505  279050 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:42.181517  279050 node_conditions.go:105] duration metric: took 2.408547ms to run NodePressure ...
	I1126 20:22:42.181527  279050 start.go:242] waiting for startup goroutines ...
	I1126 20:22:42.181536  279050 start.go:247] waiting for cluster config update ...
	I1126 20:22:42.181545  279050 start.go:256] writing updated cluster config ...
	I1126 20:22:42.181758  279050 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:42.185430  279050 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:42.188391  279050 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wl4xp" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:22:44.193372  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:22:46.193941  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	I1126 20:22:44.775191  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:22:44.775236  281230 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:22:44.775284  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:44.802026  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:44.804445  281230 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:44.804510  281230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:44.804668  281230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:22:44.809300  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:44.836635  281230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:22:44.906402  281230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:44.926683  281230 node_ready.go:35] waiting up to 6m0s for node "embed-certs-949294" to be "Ready" ...
	I1126 20:22:44.942037  281230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:44.943189  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:22:44.943289  281230 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:22:44.958004  281230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:44.964275  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:22:44.964293  281230 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:22:44.988499  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:22:44.988525  281230 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:22:45.008309  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:22:45.008331  281230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:22:45.030026  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:22:45.030061  281230 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:22:45.054222  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:22:45.054247  281230 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:22:45.075321  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:22:45.075344  281230 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:22:45.092705  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:22:45.092729  281230 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:22:45.109718  281230 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:45.109739  281230 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:22:45.123556  281230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:46.834584  281230 node_ready.go:49] node "embed-certs-949294" is "Ready"
	I1126 20:22:46.834631  281230 node_ready.go:38] duration metric: took 1.907908732s for node "embed-certs-949294" to be "Ready" ...
	I1126 20:22:46.834647  281230 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:46.834802  281230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:47.646270  281230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.704087312s)
	I1126 20:22:47.646325  281230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.688236386s)
	I1126 20:22:47.646452  281230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.522860781s)
	I1126 20:22:47.646922  281230 api_server.go:72] duration metric: took 2.906037516s to wait for apiserver process to appear ...
	I1126 20:22:47.646942  281230 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:47.646959  281230 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1126 20:22:47.650745  281230 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-949294 addons enable metrics-server
	
	I1126 20:22:45.981988  283132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:22:45.987899  283132 fix.go:56] duration metric: took 4.876561031s for fixHost
	I1126 20:22:45.987927  283132 start.go:83] releasing machines lock for "newest-cni-297942", held for 4.876610638s
	I1126 20:22:45.987992  283132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-297942
	I1126 20:22:46.011274  283132 ssh_runner.go:195] Run: cat /version.json
	I1126 20:22:46.011335  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:46.011553  283132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:22:46.011634  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:46.035874  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:46.038422  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:46.145928  283132 ssh_runner.go:195] Run: systemctl --version
	I1126 20:22:46.208754  283132 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:22:46.260685  283132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:22:46.266786  283132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:22:46.266850  283132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:22:46.279170  283132 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:22:46.279196  283132 start.go:496] detecting cgroup driver to use...
	I1126 20:22:46.279228  283132 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:22:46.279279  283132 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:22:46.296769  283132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:22:46.312842  283132 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:22:46.313623  283132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:22:46.336404  283132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:22:46.362833  283132 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:22:46.485694  283132 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:22:46.608625  283132 docker.go:234] disabling docker service ...
	I1126 20:22:46.608710  283132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:22:46.627969  283132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:22:46.647325  283132 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:22:46.777835  283132 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:22:46.941504  283132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:22:46.960693  283132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:22:46.980499  283132 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:22:46.980558  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:46.994995  283132 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:22:46.995161  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.007396  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.019337  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.031265  283132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:22:47.041699  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.052215  283132 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.063748  283132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:22:47.075564  283132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:22:47.087066  283132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:22:47.098156  283132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:47.230987  283132 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:22:47.533145  283132 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:22:47.533212  283132 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:22:47.539562  283132 start.go:564] Will wait 60s for crictl version
	I1126 20:22:47.539619  283132 ssh_runner.go:195] Run: which crictl
	I1126 20:22:47.545726  283132 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:22:47.577381  283132 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:22:47.577482  283132 ssh_runner.go:195] Run: crio --version
	I1126 20:22:47.614544  283132 ssh_runner.go:195] Run: crio --version
	I1126 20:22:47.654164  283132 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:22:47.652263  281230 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:47.652284  281230 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:47.661749  281230 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1126 20:22:47.655252  283132 cli_runner.go:164] Run: docker network inspect newest-cni-297942 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:22:47.676378  283132 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1126 20:22:47.681380  283132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:47.696551  283132 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1126 20:22:47.697725  283132 kubeadm.go:884] updating cluster {Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:22:47.697864  283132 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:22:47.697953  283132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:47.737614  283132 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:47.737644  283132 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:22:47.737710  283132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:22:47.769807  283132 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:22:47.769838  283132 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:22:47.769848  283132 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1126 20:22:47.769987  283132 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-297942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:22:47.770072  283132 ssh_runner.go:195] Run: crio config
	I1126 20:22:47.833805  283132 cni.go:84] Creating CNI manager for ""
	I1126 20:22:47.833849  283132 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:22:47.833867  283132 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1126 20:22:47.833903  283132 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-297942 NodeName:newest-cni-297942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:22:47.834082  283132 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-297942"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:22:47.834169  283132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:22:47.843484  283132 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:22:47.843547  283132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:22:47.853856  283132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:22:47.868846  283132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:22:47.885385  283132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1126 20:22:47.903633  283132 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:22:47.908802  283132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:22:47.922224  283132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:48.037628  283132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:48.069247  283132 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942 for IP: 192.168.103.2
	I1126 20:22:48.069272  283132 certs.go:195] generating shared ca certs ...
	I1126 20:22:48.069292  283132 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:48.069497  283132 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:22:48.069570  283132 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:22:48.069587  283132 certs.go:257] generating profile certs ...
	I1126 20:22:48.069711  283132 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/client.key
	I1126 20:22:48.069784  283132 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/apiserver.key.9b9f8b84
	I1126 20:22:48.069880  283132 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/proxy-client.key
	I1126 20:22:48.070067  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:22:48.070122  283132 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:22:48.070133  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:22:48.070169  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:22:48.070199  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:22:48.070235  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:22:48.070293  283132 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:22:48.071194  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:22:48.097890  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:22:48.121561  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:22:48.146613  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:22:48.176193  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1126 20:22:48.202051  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:22:48.225070  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:22:48.246760  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/newest-cni-297942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:22:48.269084  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:22:48.292062  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:22:48.313735  283132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:22:48.335657  283132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:22:48.351074  283132 ssh_runner.go:195] Run: openssl version
	I1126 20:22:48.358937  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:22:48.369856  283132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:22:48.375367  283132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:22:48.375419  283132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:22:48.428766  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:22:48.439674  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:22:48.450900  283132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:48.455705  283132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:48.455757  283132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:22:48.509707  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:22:48.520864  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:22:48.532096  283132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:22:48.536714  283132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:22:48.536763  283132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:22:48.592642  283132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:22:48.602562  283132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:22:48.607725  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:22:48.668271  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:22:48.723058  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:22:48.766993  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:22:48.809051  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:22:48.869800  283132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:22:48.933325  283132 kubeadm.go:401] StartCluster: {Name:newest-cni-297942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-297942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:22:48.933433  283132 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:22:48.933507  283132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:22:48.969182  283132 cri.go:89] found id: ""
	I1126 20:22:48.969273  283132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:22:48.980080  283132 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:22:48.980099  283132 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:22:48.980145  283132 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:22:48.990153  283132 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:22:48.991382  283132 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-297942" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:48.992253  283132 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-297942" cluster setting kubeconfig missing "newest-cni-297942" context setting]
	I1126 20:22:48.993562  283132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:48.995871  283132 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:22:49.006243  283132 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1126 20:22:49.006272  283132 kubeadm.go:602] duration metric: took 26.166791ms to restartPrimaryControlPlane
	I1126 20:22:49.006282  283132 kubeadm.go:403] duration metric: took 72.966028ms to StartCluster
	I1126 20:22:49.006297  283132 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:49.006353  283132 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:22:49.008962  283132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:22:49.010081  283132 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:22:49.010330  283132 config.go:182] Loaded profile config "newest-cni-297942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:22:49.010385  283132 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:22:49.010493  283132 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-297942"
	I1126 20:22:49.010512  283132 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-297942"
	W1126 20:22:49.010523  283132 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:22:49.010550  283132 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:49.010793  283132 addons.go:70] Setting dashboard=true in profile "newest-cni-297942"
	I1126 20:22:49.010822  283132 addons.go:70] Setting default-storageclass=true in profile "newest-cni-297942"
	I1126 20:22:49.010829  283132 addons.go:239] Setting addon dashboard=true in "newest-cni-297942"
	W1126 20:22:49.010840  283132 addons.go:248] addon dashboard should already be in state true
	I1126 20:22:49.010844  283132 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-297942"
	I1126 20:22:49.010864  283132 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:49.011039  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.011163  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.011281  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.039942  283132 addons.go:239] Setting addon default-storageclass=true in "newest-cni-297942"
	W1126 20:22:49.039969  283132 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:22:49.039995  283132 host.go:66] Checking if "newest-cni-297942" exists ...
	I1126 20:22:49.040473  283132 cli_runner.go:164] Run: docker container inspect newest-cni-297942 --format={{.State.Status}}
	I1126 20:22:49.062659  283132 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:49.062681  283132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:22:49.062734  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:49.071753  283132 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:22:49.071754  283132 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:22:49.071760  283132 out.go:179] * Verifying Kubernetes components...
	I1126 20:22:49.083205  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:49.093615  283132 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:49.093646  283132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:22:49.093716  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:49.094772  283132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:22:49.095752  283132 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:22:49.098197  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:22:49.098216  283132 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:22:49.098302  283132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-297942
	I1126 20:22:49.120042  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:49.124517  283132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/newest-cni-297942/id_rsa Username:docker}
	I1126 20:22:49.223673  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:49.233917  283132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:22:49.244980  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:49.257038  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:22:49.257061  283132 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:22:49.295636  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:22:49.295664  283132 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W1126 20:22:49.312492  283132 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.312533  283132 retry.go:31] will retry after 141.575876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
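The apply failure and the "will retry after 141.575876ms" line above come from minikube's generic retry helper: run `kubectl apply`, and on a non-zero exit sleep a short randomized backoff before trying again (the apiserver is still coming up, so early attempts get connection refused). A minimal sketch of that pattern, with `apply` as a hypothetical stand-in for the kubectl invocation (not minikube's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// apply stands in for `kubectl apply -f <file>`; here it fails for the
// first two attempts, the way early applies fail while the apiserver
// is still starting.
func apply(attempt int) error {
	if attempt < 2 {
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	}
	return nil
}

// retryApply mirrors the pattern in the log: on failure, log a
// randomized backoff, sleep, and try again, up to maxAttempts.
func retryApply(maxAttempts int) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = apply(i); err == nil {
			return nil
		}
		wait := time.Duration(100+rand.Intn(150)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	if err := retryApply(5); err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println("applied")
}
```

In the log the retries eventually give way to a forced apply (`kubectl apply --force`) once the apiserver socket is reachable, which is why both storageclass.yaml and storage-provisioner.yaml complete a couple of seconds later.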
	I1126 20:22:49.312612  283132 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:22:49.312669  283132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:22:49.321556  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:22:49.321592  283132 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1126 20:22:49.344947  283132 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.344982  283132 retry.go:31] will retry after 218.049714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1126 20:22:49.345028  283132 api_server.go:72] duration metric: took 334.915012ms to wait for apiserver process to appear ...
	I1126 20:22:49.345038  283132 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:22:49.345054  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:49.345834  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:22:49.345938  283132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:22:49.346111  283132 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1126 20:22:49.369397  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:22:49.369420  283132 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:22:49.390504  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:22:49.390683  283132 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:22:49.408410  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:22:49.408441  283132 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:22:49.426482  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:22:49.426503  283132 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:22:49.442793  283132 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:49.442870  283132 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:22:49.454437  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:22:49.461179  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:22:49.563685  283132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:22:49.845496  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1126 20:22:48.197694  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:22:50.201188  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	I1126 20:22:51.277974  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 20:22:51.278018  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 20:22:51.278039  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:51.287748  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 20:22:51.287777  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 20:22:51.345992  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:51.353164  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:51.353197  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:51.403236  283132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.948765551s)
	I1126 20:22:51.845876  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:51.854352  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:51.854381  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:51.937991  283132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.476761053s)
	I1126 20:22:51.940235  283132 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-297942 addons enable metrics-server
	
	I1126 20:22:52.048989  283132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.485263917s)
	I1126 20:22:52.050773  283132 out.go:179] * Enabled addons: default-storageclass, dashboard, storage-provisioner
	I1126 20:22:47.665529  281230 addons.go:530] duration metric: took 2.924403622s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1126 20:22:48.147073  281230 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1126 20:22:48.153314  281230 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1126 20:22:48.154522  281230 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:48.154551  281230 api_server.go:131] duration metric: took 507.601137ms to wait for apiserver health ...
	I1126 20:22:48.154562  281230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:48.159761  281230 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:48.159808  281230 system_pods.go:61] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:48.159819  281230 system_pods.go:61] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:48.159827  281230 system_pods.go:61] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:48.159836  281230 system_pods.go:61] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:48.159858  281230 system_pods.go:61] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:48.159867  281230 system_pods.go:61] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:48.159875  281230 system_pods.go:61] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:48.159880  281230 system_pods.go:61] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Running
	I1126 20:22:48.159888  281230 system_pods.go:74] duration metric: took 5.318838ms to wait for pod list to return data ...
	I1126 20:22:48.159896  281230 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:48.163237  281230 default_sa.go:45] found service account: "default"
	I1126 20:22:48.163425  281230 default_sa.go:55] duration metric: took 3.520246ms for default service account to be created ...
	I1126 20:22:48.163453  281230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:22:48.167512  281230 system_pods.go:86] 8 kube-system pods found
	I1126 20:22:48.168002  281230 system_pods.go:89] "coredns-66bc5c9577-s8rrr" [fb1d5777-ea89-40cb-ae5c-dd8bde47f3de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:22:48.168069  281230 system_pods.go:89] "etcd-embed-certs-949294" [ae44b779-edef-49c6-8b68-b6114e9a4d68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:48.168093  281230 system_pods.go:89] "kindnet-9546l" [5f44d5e6-677c-4df7-9534-bfdf1e6b06b4] Running
	I1126 20:22:48.168114  281230 system_pods.go:89] "kube-apiserver-embed-certs-949294" [25d397fc-5140-407f-80f8-e05072c2d44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:48.168149  281230 system_pods.go:89] "kube-controller-manager-embed-certs-949294" [9f206785-00f8-4fb7-a52c-95ea02516271] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:48.168176  281230 system_pods.go:89] "kube-proxy-qnjvr" [d9dba8a9-9c13-46e2-9ada-a2b8daca8d73] Running
	I1126 20:22:48.168197  281230 system_pods.go:89] "kube-scheduler-embed-certs-949294" [26c4da52-2b0a-4a83-9549-de780ffeddb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:48.168213  281230 system_pods.go:89] "storage-provisioner" [ad12d5e5-d681-4dfc-9970-d2340ac55ed7] Running
	I1126 20:22:48.168233  281230 system_pods.go:126] duration metric: took 4.719858ms to wait for k8s-apps to be running ...
	I1126 20:22:48.168284  281230 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:22:48.168353  281230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:22:48.189258  281230 system_svc.go:56] duration metric: took 20.967364ms WaitForService to wait for kubelet
	I1126 20:22:48.189288  281230 kubeadm.go:587] duration metric: took 3.448403882s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:22:48.189311  281230 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:48.194077  281230 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:48.194116  281230 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:48.194135  281230 node_conditions.go:105] duration metric: took 4.818329ms to run NodePressure ...
	I1126 20:22:48.194150  281230 start.go:242] waiting for startup goroutines ...
	I1126 20:22:48.194164  281230 start.go:247] waiting for cluster config update ...
	I1126 20:22:48.194178  281230 start.go:256] writing updated cluster config ...
	I1126 20:22:48.194454  281230 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:48.199326  281230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:22:48.204363  281230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:22:50.231611  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:22:52.051919  283132 addons.go:530] duration metric: took 3.041532347s for enable addons: enabled=[default-storageclass dashboard storage-provisioner]
	I1126 20:22:52.345587  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:52.350543  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:22:52.350570  283132 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:22:52.846025  283132 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:22:52.851313  283132 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1126 20:22:52.852557  283132 api_server.go:141] control plane version: v1.34.1
	I1126 20:22:52.852582  283132 api_server.go:131] duration metric: took 3.507536375s to wait for apiserver health ...
	I1126 20:22:52.852593  283132 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:22:52.856745  283132 system_pods.go:59] 8 kube-system pods found
	I1126 20:22:52.856818  283132 system_pods.go:61] "coredns-66bc5c9577-bnszr" [ddf077eb-a9c4-42f2-a9b7-0aced551aa38] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:22:52.856864  283132 system_pods.go:61] "etcd-newest-cni-297942" [6520dcdd-9b71-4c83-8e54-7421dd7034af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:22:52.856881  283132 system_pods.go:61] "kindnet-wlhp7" [a6a459a7-87d9-4628-ad09-7e6e8d8445da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:22:52.856908  283132 system_pods.go:61] "kube-apiserver-newest-cni-297942" [7c910df8-6020-46fb-a380-09a0698b3720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:22:52.856922  283132 system_pods.go:61] "kube-controller-manager-newest-cni-297942" [66f96670-85f0-47d1-859b-4844b80909d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:22:52.856931  283132 system_pods.go:61] "kube-proxy-lx6vw" [6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:22:52.856939  283132 system_pods.go:61] "kube-scheduler-newest-cni-297942" [4d59e692-80ac-4baa-9316-d8930f423531] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:22:52.856947  283132 system_pods.go:61] "storage-provisioner" [815d8b30-f9a4-4565-9f15-f45940446bd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:22:52.856955  283132 system_pods.go:74] duration metric: took 4.355286ms to wait for pod list to return data ...
	I1126 20:22:52.856965  283132 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:22:52.859730  283132 default_sa.go:45] found service account: "default"
	I1126 20:22:52.859762  283132 default_sa.go:55] duration metric: took 2.779407ms for default service account to be created ...
	I1126 20:22:52.859775  283132 kubeadm.go:587] duration metric: took 3.849662669s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:22:52.859793  283132 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:22:52.862559  283132 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:22:52.862585  283132 node_conditions.go:123] node cpu capacity is 8
	I1126 20:22:52.862603  283132 node_conditions.go:105] duration metric: took 2.80479ms to run NodePressure ...
	I1126 20:22:52.862617  283132 start.go:242] waiting for startup goroutines ...
	I1126 20:22:52.862626  283132 start.go:247] waiting for cluster config update ...
	I1126 20:22:52.862639  283132 start.go:256] writing updated cluster config ...
	I1126 20:22:52.863068  283132 ssh_runner.go:195] Run: rm -f paused
	I1126 20:22:52.938360  283132 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:22:52.940104  283132 out.go:179] * Done! kubectl is now configured to use "newest-cni-297942" cluster and "default" namespace by default
	W1126 20:22:52.694065  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:22:54.694364  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:22:52.710895  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:22:54.711079  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:22:57.210491  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.467486438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.473068167Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=42980c7b-bee7-482e-95b7-ef2927b19ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.473969937Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=deb7e6c2-9d46-4f41-9e28-4820819a4a06 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.47585817Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.476580163Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.476734369Z" level=info msg="Ran pod sandbox 5801de75823a0e6217a434dce6ba4077162377b24ceb13370b3d512ac33700dc with infra container: kube-system/kindnet-wlhp7/POD" id=deb7e6c2-9d46-4f41-9e28-4820819a4a06 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.477497012Z" level=info msg="Ran pod sandbox a3a69b0c4857c0198f6c9f17b7f83b9ff78520fc6a633e61883c9473ad0a96bd with infra container: kube-system/kube-proxy-lx6vw/POD" id=42980c7b-bee7-482e-95b7-ef2927b19ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.478198147Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=52293e5d-09a9-42c2-914c-c253017c66b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.478403343Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f02276b5-1a2d-42ac-88a3-e9d3bd76fb5c name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.479654378Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5ce84deb-7532-4eb8-bd61-cad169f6dc3a name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.480221628Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=be227d1c-8839-4482-be29-31307e285e15 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.480967306Z" level=info msg="Creating container: kube-system/kindnet-wlhp7/kindnet-cni" id=e82dd60f-f707-41f8-93a8-d271adb935ef name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.481058468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.481255877Z" level=info msg="Creating container: kube-system/kube-proxy-lx6vw/kube-proxy" id=a4957c48-3910-4123-9c9f-593fdbb5b8e6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.481361685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.486448795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.487260579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.48966009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.490349305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.514407584Z" level=info msg="Created container 0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874: kube-system/kindnet-wlhp7/kindnet-cni" id=e82dd60f-f707-41f8-93a8-d271adb935ef name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.515176503Z" level=info msg="Starting container: 0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874" id=17d731e5-b509-41d9-824d-70c371b85119 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.517591988Z" level=info msg="Started container" PID=1049 containerID=0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874 description=kube-system/kindnet-wlhp7/kindnet-cni id=17d731e5-b509-41d9-824d-70c371b85119 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5801de75823a0e6217a434dce6ba4077162377b24ceb13370b3d512ac33700dc
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.521902042Z" level=info msg="Created container 79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c: kube-system/kube-proxy-lx6vw/kube-proxy" id=a4957c48-3910-4123-9c9f-593fdbb5b8e6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.522822809Z" level=info msg="Starting container: 79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c" id=3cb4b460-83e1-446f-b888-bb24002e2b29 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:22:52 newest-cni-297942 crio[521]: time="2025-11-26T20:22:52.526245991Z" level=info msg="Started container" PID=1050 containerID=79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c description=kube-system/kube-proxy-lx6vw/kube-proxy id=3cb4b460-83e1-446f-b888-bb24002e2b29 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3a69b0c4857c0198f6c9f17b7f83b9ff78520fc6a633e61883c9473ad0a96bd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	79a3e043f4e71       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   a3a69b0c4857c       kube-proxy-lx6vw                            kube-system
	0bde763b33e49       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   5801de75823a0       kindnet-wlhp7                               kube-system
	0b56ca8f427ee       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   b883884666259       etcd-newest-cni-297942                      kube-system
	c6b80fb157b18       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   3a058b020d993       kube-apiserver-newest-cni-297942            kube-system
	db4659bb85541       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   2ee8599e03cda       kube-scheduler-newest-cni-297942            kube-system
	cc32d8959eb06       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   416c19c6efc84       kube-controller-manager-newest-cni-297942   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-297942
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-297942
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=newest-cni-297942
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_22_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:22:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-297942
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:22:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:22:51 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:22:51 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:22:51 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 26 Nov 2025 20:22:51 +0000   Wed, 26 Nov 2025 20:22:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-297942
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                8cbd9667-abfd-484d-8f07-0a0070bb411f
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-297942                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-wlhp7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-297942             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-297942    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-lx6vw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-297942             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 6s    kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node newest-cni-297942 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node newest-cni-297942 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node newest-cni-297942 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node newest-cni-297942 event: Registered Node newest-cni-297942 in Controller
	  Normal  RegisteredNode           5s    node-controller  Node newest-cni-297942 event: Registered Node newest-cni-297942 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [0b56ca8f427ee8a88caaf56897449390deb07473ded1db55b4a4435b4a244998] <==
	{"level":"warn","ts":"2025-11-26T20:22:50.401180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.409484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.419540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.432103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.444199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.454651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.462046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.471417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.489665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.498971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.506892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.515617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.524331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.531857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.540806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.549096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.565279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.573849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.581924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.590550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.598973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.612530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.622250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.630887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:50.720582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:22:59 up  1:05,  0 user,  load average: 4.65, 3.35, 2.15
	Linux newest-cni-297942 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bde763b33e49c60ef0ea171446deb3af31ee6ffb05e6de302bf2264a5ab2874] <==
	I1126 20:22:52.767174       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:22:52.767412       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1126 20:22:52.767522       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:22:52.767541       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:22:52.767551       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:22:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:22:52.970805       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:22:52.970833       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:22:52.970843       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:22:52.971186       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:22:53.370920       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:22:53.370952       1 metrics.go:72] Registering metrics
	I1126 20:22:53.371001       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [c6b80fb157b1876098abb174101b5cbb41aab43e97ca6bbab5097ef0385c3b13] <==
	I1126 20:22:51.351944       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:22:51.351993       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:22:51.352013       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:22:51.353782       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:22:51.356339       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:22:51.358415       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:22:51.359210       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:22:51.359223       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:22:51.359229       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:22:51.359236       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:22:51.353795       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1126 20:22:51.353819       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:22:51.409942       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:22:51.411489       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:22:51.773359       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:22:51.811350       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:22:51.840125       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:22:51.853764       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:22:51.862679       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:22:51.916260       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.138.90"}
	I1126 20:22:51.932531       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.79.147"}
	I1126 20:22:52.258163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:22:55.050561       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:22:55.098642       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:22:55.250052       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cc32d8959eb0604113131904bc81d9b470aeaecc3fe9ac30a6213641dcb226f1] <==
	I1126 20:22:54.745950       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:22:54.745986       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:22:54.746122       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:22:54.746148       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:22:54.746357       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-297942"
	I1126 20:22:54.746392       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:22:54.746426       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1126 20:22:54.746587       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 20:22:54.747538       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:22:54.747630       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:22:54.748694       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:22:54.750669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:54.751736       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:22:54.751767       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 20:22:54.751774       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:22:54.751825       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:22:54.751836       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:22:54.751842       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:22:54.756996       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:54.761289       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:54.763342       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:22:54.764483       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:22:54.769671       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:22:54.772932       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:22:54.777570       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [79a3e043f4e7140d7b0ca23ced001e62aaef6584f996e79c9766818d3dc88c0c] <==
	I1126 20:22:52.571934       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:22:52.627564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:22:52.728663       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:22:52.728702       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1126 20:22:52.728806       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:22:52.746982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:22:52.747041       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:22:52.753213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:22:52.753765       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:22:52.753853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:52.757671       1 config.go:200] "Starting service config controller"
	I1126 20:22:52.757741       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:22:52.757699       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:22:52.757801       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:22:52.757714       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:22:52.757867       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:22:52.757875       1 config.go:309] "Starting node config controller"
	I1126 20:22:52.757944       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:22:52.757968       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:22:52.857889       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:22:52.857921       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:22:52.857889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [db4659bb8554145bfacd475b919e39a9767e55ff868938d2d744b53d7838507e] <==
	I1126 20:22:50.383534       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:22:51.277120       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:22:51.277158       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:22:51.277170       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:22:51.277181       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:22:51.347574       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:22:51.347622       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:51.351700       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:51.351785       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:51.353020       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:22:51.353113       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:22:51.452001       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: E1126 20:22:51.972912     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-297942\" already exists" pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: I1126 20:22:51.972949     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: E1126 20:22:51.980296     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-297942\" already exists" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: I1126 20:22:51.980332     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: E1126 20:22:51.989903     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-297942\" already exists" pod="kube-system/kube-scheduler-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: I1126 20:22:51.990531     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-297942"
	Nov 26 20:22:51 newest-cni-297942 kubelet[664]: E1126 20:22:51.997013     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-297942\" already exists" pod="kube-system/etcd-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.157123     664 apiserver.go:52] "Watching apiserver"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.162071     664 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202607     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6a459a7-87d9-4628-ad09-7e6e8d8445da-xtables-lock\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202686     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c-xtables-lock\") pod \"kube-proxy-lx6vw\" (UID: \"6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c\") " pod="kube-system/kube-proxy-lx6vw"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202744     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c-lib-modules\") pod \"kube-proxy-lx6vw\" (UID: \"6e8b3fed-5b44-42fd-9259-f59bfc5b1f0c\") " pod="kube-system/kube-proxy-lx6vw"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202786     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a6a459a7-87d9-4628-ad09-7e6e8d8445da-cni-cfg\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.202807     664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6a459a7-87d9-4628-ad09-7e6e8d8445da-lib-modules\") pod \"kindnet-wlhp7\" (UID: \"a6a459a7-87d9-4628-ad09-7e6e8d8445da\") " pod="kube-system/kindnet-wlhp7"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.243048     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.243432     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.243759     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: I1126 20:22:52.244223     664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: E1126 20:22:52.258984     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-297942\" already exists" pod="kube-system/kube-controller-manager-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: E1126 20:22:52.259591     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-297942\" already exists" pod="kube-system/kube-scheduler-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: E1126 20:22:52.259897     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-297942\" already exists" pod="kube-system/etcd-newest-cni-297942"
	Nov 26 20:22:52 newest-cni-297942 kubelet[664]: E1126 20:22:52.260077     664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-297942\" already exists" pod="kube-system/kube-apiserver-newest-cni-297942"
	Nov 26 20:22:54 newest-cni-297942 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:22:54 newest-cni-297942 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:22:54 newest-cni-297942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-297942 -n newest-cni-297942
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-297942 -n newest-cni-297942: exit status 2 (321.46415ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-297942 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bnszr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hw5ql kubernetes-dashboard-855c9754f9-92dlp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-297942 describe pod coredns-66bc5c9577-bnszr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hw5ql kubernetes-dashboard-855c9754f9-92dlp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-297942 describe pod coredns-66bc5c9577-bnszr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hw5ql kubernetes-dashboard-855c9754f9-92dlp: exit status 1 (58.950287ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bnszr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-hw5ql" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-92dlp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-297942 describe pod coredns-66bc5c9577-bnszr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hw5ql kubernetes-dashboard-855c9754f9-92dlp: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-026579 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-026579 --alsologtostderr -v=1: exit status 80 (2.579857766s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-026579 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 20:23:30.917029  295940 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:23:30.917235  295940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:30.917243  295940 out.go:374] Setting ErrFile to fd 2...
	I1126 20:23:30.917246  295940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:30.917430  295940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:23:30.917671  295940 out.go:368] Setting JSON to false
	I1126 20:23:30.917692  295940 mustload.go:66] Loading cluster: no-preload-026579
	I1126 20:23:30.918139  295940 config.go:182] Loaded profile config "no-preload-026579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:30.918613  295940 cli_runner.go:164] Run: docker container inspect no-preload-026579 --format={{.State.Status}}
	I1126 20:23:30.936533  295940 host.go:66] Checking if "no-preload-026579" exists ...
	I1126 20:23:30.936802  295940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:30.995444  295940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-26 20:23:30.985518143 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:30.996190  295940 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-026579 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:23:30.997636  295940 out.go:179] * Pausing node no-preload-026579 ... 
	I1126 20:23:30.998917  295940 host.go:66] Checking if "no-preload-026579" exists ...
	I1126 20:23:30.999219  295940 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:30.999277  295940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-026579
	I1126 20:23:31.015813  295940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/no-preload-026579/id_rsa Username:docker}
	I1126 20:23:31.118564  295940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:31.131647  295940 pause.go:52] kubelet running: true
	I1126 20:23:31.131719  295940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:23:31.287968  295940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:23:31.288044  295940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:23:31.351333  295940 cri.go:89] found id: "7f8751fb0a65f1680045ae79ecb0521ff69da6a9ef59fff50fa749d20fb9cf69"
	I1126 20:23:31.351358  295940 cri.go:89] found id: "5e01aebce076296d3ed66d96beac70b5dd1097a82f3df65932b28a11c416eb6f"
	I1126 20:23:31.351363  295940 cri.go:89] found id: "3bda7bcb07b58f6c63b7516471b9f44f6a342dfe7e086d6c5c44bb0df06149f9"
	I1126 20:23:31.351366  295940 cri.go:89] found id: "32e35f17feb89664f1fae773b335343aa39c1e0842c3b1752b38b7722a5b602f"
	I1126 20:23:31.351369  295940 cri.go:89] found id: "42828f83720fe35dd743e7a5c065fe0193e00746358e315543e3a492e04cddc2"
	I1126 20:23:31.351380  295940 cri.go:89] found id: "5a4a0e2af1862b25492b690a7a862bef76306a1889835ec8f86fa09057dad6a9"
	I1126 20:23:31.351385  295940 cri.go:89] found id: "bbe6a4946c0088614f8a872edc385ef4bf724c29b1ba7f173936d426a0fc26e3"
	I1126 20:23:31.351389  295940 cri.go:89] found id: "d4451cac813aa54ddc31652afec1d7e8194495b96f8e083719c0cc55f90393e6"
	I1126 20:23:31.351394  295940 cri.go:89] found id: "590f69567c94cb9063adb6b5b5bfedd56c93afe7320fc6431800372b19411ff4"
	I1126 20:23:31.351412  295940 cri.go:89] found id: "d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401"
	I1126 20:23:31.351421  295940 cri.go:89] found id: "2a10859cf5d562e2b2c673c950cfcf54f0d04d81fa0b79c317ba79e924252860"
	I1126 20:23:31.351424  295940 cri.go:89] found id: ""
	I1126 20:23:31.351475  295940 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:23:31.362612  295940 retry.go:31] will retry after 336.149058ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:31Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:23:31.699915  295940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:31.712345  295940 pause.go:52] kubelet running: false
	I1126 20:23:31.712403  295940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:23:31.855953  295940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:23:31.856047  295940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:23:31.918130  295940 cri.go:89] found id: "7f8751fb0a65f1680045ae79ecb0521ff69da6a9ef59fff50fa749d20fb9cf69"
	I1126 20:23:31.918149  295940 cri.go:89] found id: "5e01aebce076296d3ed66d96beac70b5dd1097a82f3df65932b28a11c416eb6f"
	I1126 20:23:31.918155  295940 cri.go:89] found id: "3bda7bcb07b58f6c63b7516471b9f44f6a342dfe7e086d6c5c44bb0df06149f9"
	I1126 20:23:31.918160  295940 cri.go:89] found id: "32e35f17feb89664f1fae773b335343aa39c1e0842c3b1752b38b7722a5b602f"
	I1126 20:23:31.918163  295940 cri.go:89] found id: "42828f83720fe35dd743e7a5c065fe0193e00746358e315543e3a492e04cddc2"
	I1126 20:23:31.918168  295940 cri.go:89] found id: "5a4a0e2af1862b25492b690a7a862bef76306a1889835ec8f86fa09057dad6a9"
	I1126 20:23:31.918172  295940 cri.go:89] found id: "bbe6a4946c0088614f8a872edc385ef4bf724c29b1ba7f173936d426a0fc26e3"
	I1126 20:23:31.918176  295940 cri.go:89] found id: "d4451cac813aa54ddc31652afec1d7e8194495b96f8e083719c0cc55f90393e6"
	I1126 20:23:31.918188  295940 cri.go:89] found id: "590f69567c94cb9063adb6b5b5bfedd56c93afe7320fc6431800372b19411ff4"
	I1126 20:23:31.918195  295940 cri.go:89] found id: "d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401"
	I1126 20:23:31.918200  295940 cri.go:89] found id: "2a10859cf5d562e2b2c673c950cfcf54f0d04d81fa0b79c317ba79e924252860"
	I1126 20:23:31.918204  295940 cri.go:89] found id: ""
	I1126 20:23:31.918253  295940 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:23:31.929433  295940 retry.go:31] will retry after 219.184051ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:31Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:23:32.149781  295940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:32.163137  295940 pause.go:52] kubelet running: false
	I1126 20:23:32.163181  295940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:23:32.309443  295940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:23:32.309537  295940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:23:32.372236  295940 cri.go:89] found id: "7f8751fb0a65f1680045ae79ecb0521ff69da6a9ef59fff50fa749d20fb9cf69"
	I1126 20:23:32.372258  295940 cri.go:89] found id: "5e01aebce076296d3ed66d96beac70b5dd1097a82f3df65932b28a11c416eb6f"
	I1126 20:23:32.372264  295940 cri.go:89] found id: "3bda7bcb07b58f6c63b7516471b9f44f6a342dfe7e086d6c5c44bb0df06149f9"
	I1126 20:23:32.372286  295940 cri.go:89] found id: "32e35f17feb89664f1fae773b335343aa39c1e0842c3b1752b38b7722a5b602f"
	I1126 20:23:32.372291  295940 cri.go:89] found id: "42828f83720fe35dd743e7a5c065fe0193e00746358e315543e3a492e04cddc2"
	I1126 20:23:32.372295  295940 cri.go:89] found id: "5a4a0e2af1862b25492b690a7a862bef76306a1889835ec8f86fa09057dad6a9"
	I1126 20:23:32.372299  295940 cri.go:89] found id: "bbe6a4946c0088614f8a872edc385ef4bf724c29b1ba7f173936d426a0fc26e3"
	I1126 20:23:32.372303  295940 cri.go:89] found id: "d4451cac813aa54ddc31652afec1d7e8194495b96f8e083719c0cc55f90393e6"
	I1126 20:23:32.372306  295940 cri.go:89] found id: "590f69567c94cb9063adb6b5b5bfedd56c93afe7320fc6431800372b19411ff4"
	I1126 20:23:32.372317  295940 cri.go:89] found id: "d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401"
	I1126 20:23:32.372324  295940 cri.go:89] found id: "2a10859cf5d562e2b2c673c950cfcf54f0d04d81fa0b79c317ba79e924252860"
	I1126 20:23:32.372328  295940 cri.go:89] found id: ""
	I1126 20:23:32.372372  295940 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:23:32.384272  295940 retry.go:31] will retry after 687.6737ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:32Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:23:33.072546  295940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:33.108340  295940 pause.go:52] kubelet running: false
	I1126 20:23:33.108396  295940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:23:33.332665  295940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:23:33.332736  295940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:23:33.413754  295940 cri.go:89] found id: "7f8751fb0a65f1680045ae79ecb0521ff69da6a9ef59fff50fa749d20fb9cf69"
	I1126 20:23:33.413772  295940 cri.go:89] found id: "5e01aebce076296d3ed66d96beac70b5dd1097a82f3df65932b28a11c416eb6f"
	I1126 20:23:33.413777  295940 cri.go:89] found id: "3bda7bcb07b58f6c63b7516471b9f44f6a342dfe7e086d6c5c44bb0df06149f9"
	I1126 20:23:33.413780  295940 cri.go:89] found id: "32e35f17feb89664f1fae773b335343aa39c1e0842c3b1752b38b7722a5b602f"
	I1126 20:23:33.413783  295940 cri.go:89] found id: "42828f83720fe35dd743e7a5c065fe0193e00746358e315543e3a492e04cddc2"
	I1126 20:23:33.413786  295940 cri.go:89] found id: "5a4a0e2af1862b25492b690a7a862bef76306a1889835ec8f86fa09057dad6a9"
	I1126 20:23:33.413789  295940 cri.go:89] found id: "bbe6a4946c0088614f8a872edc385ef4bf724c29b1ba7f173936d426a0fc26e3"
	I1126 20:23:33.413791  295940 cri.go:89] found id: "d4451cac813aa54ddc31652afec1d7e8194495b96f8e083719c0cc55f90393e6"
	I1126 20:23:33.413794  295940 cri.go:89] found id: "590f69567c94cb9063adb6b5b5bfedd56c93afe7320fc6431800372b19411ff4"
	I1126 20:23:33.413800  295940 cri.go:89] found id: "d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401"
	I1126 20:23:33.413805  295940 cri.go:89] found id: "2a10859cf5d562e2b2c673c950cfcf54f0d04d81fa0b79c317ba79e924252860"
	I1126 20:23:33.413809  295940 cri.go:89] found id: ""
	I1126 20:23:33.413849  295940 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:23:33.429927  295940 out.go:203] 
	W1126 20:23:33.431075  295940 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:23:33.431094  295940 out.go:285] * 
	* 
	W1126 20:23:33.435954  295940 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:23:33.437194  295940 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-026579 --alsologtostderr -v=1 failed: exit status 80
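The `GUEST_PAUSE` failure above reduces to the runc error printed in logrus's `key=value` text format (`time=... level=error msg=...`). A minimal sketch of pulling structured fields out of such a line, for post-mortem filtering (the regex and the embedded sample line are illustrative, not minikube code):

```python
import re

# Sample logrus-style line, as seen in the stderr above.
line = 'time="2025-11-26T20:23:33Z" level=error msg="open /run/runc: no such file or directory"'

# Match key=value pairs where the value may be double-quoted or bare.
pairs = re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', line)
fields = {key: quoted or bare for key, quoted, bare in pairs}

print(fields["level"])  # error
print(fields["msg"])    # open /run/runc: no such file or directory
```

The error itself (`open /run/runc: no such file or directory`) indicates runc's state directory is absent on the node, which is consistent with crio being the container runtime here.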
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-026579
helpers_test.go:243: (dbg) docker inspect no-preload-026579:

-- stdout --
	[
	    {
	        "Id": "9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32",
	        "Created": "2025-11-26T20:21:13.866220209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279357,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:22:31.957869863Z",
	            "FinishedAt": "2025-11-26T20:22:31.022327472Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/hostname",
	        "HostsPath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/hosts",
	        "LogPath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32-json.log",
	        "Name": "/no-preload-026579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-026579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-026579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32",
	                "LowerDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-026579",
	                "Source": "/var/lib/docker/volumes/no-preload-026579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-026579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-026579",
	                "name.minikube.sigs.k8s.io": "no-preload-026579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2277f55c8ef1902ede7e3f4b3d395a458080b11744e33079f1231a9528121fe",
	            "SandboxKey": "/var/run/docker/netns/a2277f55c8ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-026579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2ae6f13df7ae90e563079e045184e161803a9312deeafb40deb6a3cda467fd0e",
	                    "EndpointID": "c2bae13b0e015af82c493bb224fd936eb5ba6d97dac61b1b4af008a630c15558",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "76:0a:e1:f1:ed:2e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-026579",
	                        "9844cee89f7d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
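The inspect output above shows each exposed container port bound to an ephemeral host port on 127.0.0.1. A sketch of extracting that mapping from `docker inspect` JSON, using a trimmed sample of the `NetworkSettings.Ports` block above (field names match the real output; the sample is abridged to two ports):

```python
import json

# Trimmed from the `docker inspect no-preload-026579` stdout above.
inspect_output = json.loads("""
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33083"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33086"}]
}}}]
""")

ports = inspect_output[0]["NetworkSettings"]["Ports"]
mapping = {
    container_port: bindings[0]["HostPort"]
    for container_port, bindings in ports.items()
    if bindings  # guard: unpublished ports can map to null/None
}
print(mapping)  # {'22/tcp': '33083', '8443/tcp': '33086'}
```

The same mapping can be read directly with `docker inspect --format '{{json .NetworkSettings.Ports}}' no-preload-026579`.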
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-026579 -n no-preload-026579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-026579 -n no-preload-026579: exit status 2 (364.29124ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-026579 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-026579 logs -n 25: (1.069582677s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p no-preload-026579 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p embed-certs-949294 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p no-preload-026579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-949294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ stop    │ -p newest-cni-297942 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-297942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ image   │ newest-cni-297942 image list --format=json                                                                                                                                                                                                    │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ pause   │ -p newest-cni-297942 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-178152 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ delete  │ -p newest-cni-297942                                                                                                                                                                                                                          │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ delete  │ -p newest-cni-297942                                                                                                                                                                                                                          │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p auto-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-178152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ image   │ no-preload-026579 image list --format=json                                                                                                                                                                                                    │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ pause   │ -p no-preload-026579 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:23:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:23:11.440383  292013 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:23:11.440496  292013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:11.440508  292013 out.go:374] Setting ErrFile to fd 2...
	I1126 20:23:11.440515  292013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:11.440723  292013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:23:11.441107  292013 out.go:368] Setting JSON to false
	I1126 20:23:11.442313  292013 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3941,"bootTime":1764184650,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:23:11.442360  292013 start.go:143] virtualization: kvm guest
	I1126 20:23:11.444216  292013 out.go:179] * [default-k8s-diff-port-178152] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:23:11.445318  292013 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:23:11.445306  292013 notify.go:221] Checking for updates...
	I1126 20:23:11.446480  292013 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:23:11.447697  292013 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:11.448830  292013 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:23:11.449874  292013 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:23:11.450880  292013 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:23:11.452236  292013 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:11.452747  292013 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:23:11.477152  292013 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:23:11.477223  292013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:11.530562  292013 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-26 20:23:11.521081776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:11.530668  292013 docker.go:319] overlay module found
	I1126 20:23:11.532344  292013 out.go:179] * Using the docker driver based on existing profile
	I1126 20:23:11.533553  292013 start.go:309] selected driver: docker
	I1126 20:23:11.533572  292013 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:11.533665  292013 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:23:11.534315  292013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:11.590661  292013 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-26 20:23:11.581789918 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:11.590918  292013 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:11.590946  292013 cni.go:84] Creating CNI manager for ""
	I1126 20:23:11.590995  292013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:11.591030  292013 start.go:353] cluster config:
	{Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:11.592614  292013 out.go:179] * Starting "default-k8s-diff-port-178152" primary control-plane node in "default-k8s-diff-port-178152" cluster
	I1126 20:23:11.593974  292013 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:23:11.595037  292013 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:23:11.596046  292013 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:11.596075  292013 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:23:11.596085  292013 cache.go:65] Caching tarball of preloaded images
	I1126 20:23:11.596139  292013 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:23:11.596167  292013 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:23:11.596174  292013 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:23:11.596261  292013 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/config.json ...
	I1126 20:23:11.615795  292013 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:23:11.615813  292013 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:23:11.615829  292013 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:23:11.615858  292013 start.go:360] acquireMachinesLock for default-k8s-diff-port-178152: {Name:mk205db4bd139b8853f3d786653274635beb61e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:23:11.615920  292013 start.go:364] duration metric: took 34.361µs to acquireMachinesLock for "default-k8s-diff-port-178152"
	I1126 20:23:11.615936  292013 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:23:11.615941  292013 fix.go:54] fixHost starting: 
	I1126 20:23:11.616144  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:11.633041  292013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-178152: state=Stopped err=<nil>
	W1126 20:23:11.633069  292013 fix.go:138] unexpected machine state, will restart: <nil>
	W1126 20:23:08.695965  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:23:11.193321  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:23:09.709550  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:23:11.709818  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:08.134694  290654 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-825702 --name auto-825702 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-825702 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-825702 --network auto-825702 --ip 192.168.103.2 --volume auto-825702:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:23:08.459052  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Running}}
	I1126 20:23:08.476518  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:08.493237  290654 cli_runner.go:164] Run: docker exec auto-825702 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:23:08.540339  290654 oci.go:144] the created container "auto-825702" has a running status.
	I1126 20:23:08.540374  290654 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa...
	I1126 20:23:08.625248  290654 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:23:08.653620  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:08.671280  290654 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:23:08.671296  290654 kic_runner.go:114] Args: [docker exec --privileged auto-825702 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:23:08.732039  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:08.755179  290654 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:08.755285  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:08.780893  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:08.781238  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:08.781257  290654 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:08.782168  290654 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47660->127.0.0.1:33098: read: connection reset by peer
	I1126 20:23:11.933816  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-825702
	
	I1126 20:23:11.933845  290654 ubuntu.go:182] provisioning hostname "auto-825702"
	I1126 20:23:11.933942  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:11.955152  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:11.955427  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:11.955445  290654 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-825702 && echo "auto-825702" | sudo tee /etc/hostname
	I1126 20:23:12.106616  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-825702
	
	I1126 20:23:12.106688  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.126835  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:12.127147  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:12.127173  290654 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-825702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-825702/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-825702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:12.277739  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:12.277766  290654 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:12.277789  290654 ubuntu.go:190] setting up certificates
	I1126 20:23:12.277804  290654 provision.go:84] configureAuth start
	I1126 20:23:12.277864  290654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-825702
	I1126 20:23:12.295168  290654 provision.go:143] copyHostCerts
	I1126 20:23:12.295223  290654 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:12.295236  290654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:12.295296  290654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:12.295381  290654 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:12.295390  290654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:12.295415  290654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:12.295497  290654 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:12.295506  290654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:12.295534  290654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:12.295591  290654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.auto-825702 san=[127.0.0.1 192.168.103.2 auto-825702 localhost minikube]
	I1126 20:23:12.321795  290654 provision.go:177] copyRemoteCerts
	I1126 20:23:12.321839  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:12.321870  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.339200  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:12.437185  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1126 20:23:12.456201  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:23:12.472910  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:12.489243  290654 provision.go:87] duration metric: took 211.42653ms to configureAuth
	I1126 20:23:12.489265  290654 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:12.489416  290654 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:12.489511  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.507582  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:12.507780  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:12.507796  290654 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:12.781449  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:12.781491  290654 machine.go:97] duration metric: took 4.026285211s to provisionDockerMachine
	I1126 20:23:12.781503  290654 client.go:176] duration metric: took 9.486657251s to LocalClient.Create
	I1126 20:23:12.781520  290654 start.go:167] duration metric: took 9.48674154s to libmachine.API.Create "auto-825702"
	I1126 20:23:12.781527  290654 start.go:293] postStartSetup for "auto-825702" (driver="docker")
	I1126 20:23:12.781535  290654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:12.781581  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:12.781622  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.801338  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:12.900997  290654 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:12.904439  290654 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:12.904478  290654 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:12.904490  290654 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:12.904539  290654 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:12.904630  290654 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:12.904740  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:12.912016  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:12.931277  290654 start.go:296] duration metric: took 149.73924ms for postStartSetup
	I1126 20:23:12.931620  290654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-825702
	I1126 20:23:12.948897  290654 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/config.json ...
	I1126 20:23:12.949153  290654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:12.949198  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.966056  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:13.061265  290654 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:13.065549  290654 start.go:128] duration metric: took 9.77265288s to createHost
	I1126 20:23:13.065569  290654 start.go:83] releasing machines lock for "auto-825702", held for 9.772807938s
	I1126 20:23:13.065624  290654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-825702
	I1126 20:23:13.082987  290654 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:13.083045  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:13.083065  290654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:13.083125  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:13.101098  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:13.101658  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:13.248108  290654 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:13.254244  290654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:13.288072  290654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:13.292438  290654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:13.292520  290654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:13.317258  290654 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:23:13.317277  290654 start.go:496] detecting cgroup driver to use...
	I1126 20:23:13.317301  290654 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:13.317343  290654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:13.332701  290654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:13.343996  290654 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:13.344063  290654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:13.359920  290654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:13.376200  290654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:13.458202  290654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:13.545058  290654 docker.go:234] disabling docker service ...
	I1126 20:23:13.545125  290654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:13.563618  290654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:13.575589  290654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:13.659232  290654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:13.741598  290654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:13.753230  290654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:13.766347  290654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:13.766400  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.776320  290654 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:13.776363  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.785041  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.792995  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.801178  290654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:13.808838  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.817198  290654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.829677  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.837756  290654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:13.844718  290654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:13.851623  290654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:13.929048  290654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:23:14.058401  290654 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:14.058487  290654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:14.062290  290654 start.go:564] Will wait 60s for crictl version
	I1126 20:23:14.062353  290654 ssh_runner.go:195] Run: which crictl
	I1126 20:23:14.065660  290654 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:14.091120  290654 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:14.091210  290654 ssh_runner.go:195] Run: crio --version
	I1126 20:23:14.117155  290654 ssh_runner.go:195] Run: crio --version
	I1126 20:23:14.145211  290654 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:23:14.146347  290654 cli_runner.go:164] Run: docker network inspect auto-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:14.163312  290654 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:14.167143  290654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:14.176842  290654 kubeadm.go:884] updating cluster {Name:auto-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:23:14.176954  290654 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:14.177008  290654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:14.209406  290654 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:14.209426  290654 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:14.209480  290654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:14.233034  290654 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:14.233054  290654 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:14.233064  290654 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1126 20:23:14.233167  290654 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-825702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
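The generated unit text above uses the standard systemd drop-in idiom: an empty `ExecStart=` line first clears the command inherited from the base `kubelet.service`, so the following `ExecStart=` fully replaces it instead of appending a second command. A minimal sketch of the same pattern, written to a temp directory rather than `/etc/systemd/system/kubelet.service.d` (the path and kubelet flags here are illustrative, not the exact file minikube writes):

```shell
#!/bin/sh
# Drop-in override sketch: the first (empty) ExecStart= resets the
# inherited command; the second one becomes the effective command.
set -eu
d=$(mktemp -d)
cat > "$d/10-kubeadm.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --node-ip=192.168.103.2
EOF
# Two ExecStart= lines: one reset, one replacement.
grep -c '^ExecStart=' "$d/10-kubeadm.conf"
```

Without the empty reset line, systemd would reject a second `ExecStart=` for a non-oneshot service with "Multiple ExecStart= lines" at daemon-reload time.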
	I1126 20:23:14.233234  290654 ssh_runner.go:195] Run: crio config
	I1126 20:23:14.277192  290654 cni.go:84] Creating CNI manager for ""
	I1126 20:23:14.277214  290654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:14.277232  290654 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:14.277262  290654 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-825702 NodeName:auto-825702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:14.277404  290654 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-825702"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:23:14.277482  290654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:14.285340  290654 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:14.285386  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:14.292836  290654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1126 20:23:14.304841  290654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:14.319148  290654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1126 20:23:14.330598  290654 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:14.333692  290654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
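The pair of commands above is an idempotent /etc/hosts update: first probe for the exact `IP<TAB>name` entry, and only on a miss rewrite the file as "everything except any stale line for that name, plus the fresh entry". A sketch of the same pattern against a scratch copy of the file (the temp file, seeded stale IP, and variable names here are illustrative):

```shell
#!/bin/sh
# Idempotent hosts-entry update: drop any old line for the name,
# append the current one, leave unrelated lines untouched.
set -eu
hosts=$(mktemp)   # stand-in for /etc/hosts
printf '127.0.0.1\tlocalhost\n192.168.103.9\tcontrol-plane.minikube.internal\n' > "$hosts"
ip=192.168.103.2
name=control-plane.minikube.internal
if ! grep -q "^$ip[[:space:]]$name\$" "$hosts"; then
  { grep -v "[[:space:]]$name\$" "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
  mv "$hosts.new" "$hosts"
fi
grep "$name" "$hosts"
```

Running it a second time is a no-op, since the probe now matches, which is why the log can execute this unconditionally on every start.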
	I1126 20:23:14.342648  290654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:14.418860  290654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:14.441407  290654 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702 for IP: 192.168.103.2
	I1126 20:23:14.441425  290654 certs.go:195] generating shared ca certs ...
	I1126 20:23:14.441445  290654 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.441599  290654 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:14.441660  290654 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:14.441675  290654 certs.go:257] generating profile certs ...
	I1126 20:23:14.441739  290654 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.key
	I1126 20:23:14.441756  290654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.crt with IP's: []
	I1126 20:23:14.561248  290654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.crt ...
	I1126 20:23:14.561273  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.crt: {Name:mka78bb7cd65f448b3a66a8ed3242d744cbd3ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.561443  290654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.key ...
	I1126 20:23:14.561471  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.key: {Name:mk7e6b179f66f415078976ea7604686ca387360b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.561580  290654 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac
	I1126 20:23:14.561598  290654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1126 20:23:14.653268  290654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac ...
	I1126 20:23:14.653291  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac: {Name:mk19073e3da57c61475b1d8ab67fc8245bda1990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.653426  290654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac ...
	I1126 20:23:14.653442  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac: {Name:mk442d2e6204a99840a9704e9c26d0fbee8bfeb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.653547  290654 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt
	I1126 20:23:14.653646  290654 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key
	I1126 20:23:14.653728  290654 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key
	I1126 20:23:14.653748  290654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt with IP's: []
	I1126 20:23:14.813410  290654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt ...
	I1126 20:23:14.813435  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt: {Name:mkde4786d8d21ddb4efdf9613c2ade685abc5c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.813610  290654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key ...
	I1126 20:23:14.813627  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key: {Name:mkd76ddd51996d4102db39f9558a24d218af9bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.813815  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:14.813862  290654 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:14.813874  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:14.813912  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:14.813952  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:14.814033  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:14.814101  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:14.814651  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:14.832871  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:14.849984  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:14.866732  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:14.882934  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1126 20:23:14.899525  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:14.915794  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:14.931980  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:23:14.947744  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:14.965250  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:14.981169  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:14.997436  290654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:15.009230  290654 ssh_runner.go:195] Run: openssl version
	I1126 20:23:15.015235  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:15.022951  290654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:15.026235  290654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:15.026277  290654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:15.060428  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:23:15.068547  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:15.076572  290654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:15.080134  290654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:15.080168  290654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:15.113444  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:23:15.121470  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:15.129887  290654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:15.133669  290654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:15.133717  290654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:15.170371  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
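Each of the three cert-install sequences above follows OpenSSL's subject-hash lookup convention: `openssl x509 -hash -noout` prints the hash under which the library searches a cert directory, and a `<hash>.0` symlink (e.g. the `b5213941.0` seen above) makes the CA discoverable there. A self-contained sketch of the technique using a throwaway self-signed CA in a temp directory instead of `/etc/ssl/certs` (the `demoCA` subject is made up for the example):

```shell
#!/bin/sh
# Subject-hash symlink sketch: generate a disposable CA, compute its
# lookup hash, and create the <hash>.0 link OpenSSL's cert dir expects.
set -eu
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
readlink "$dir/$hash.0"
```

The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value (`.1`, `.2`, ...); `c_rehash` or `openssl rehash` automates the same linking for a whole directory.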
	I1126 20:23:15.178639  290654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:15.182230  290654 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:23:15.182288  290654 kubeadm.go:401] StartCluster: {Name:auto-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:15.182372  290654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:15.182417  290654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:15.213578  290654 cri.go:89] found id: ""
	I1126 20:23:15.213641  290654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:15.221802  290654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:23:15.229340  290654 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:23:15.229390  290654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:23:15.236999  290654 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:23:15.237013  290654 kubeadm.go:158] found existing configuration files:
	
	I1126 20:23:15.237046  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:23:15.243974  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:23:15.244013  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:23:15.250710  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:23:15.257608  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:23:15.257644  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:23:15.264135  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:23:15.270882  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:23:15.270929  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:23:15.277491  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:23:15.284739  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:23:15.284784  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:23:15.292713  290654 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:23:15.331393  290654 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:23:15.331452  290654 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:23:15.349760  290654 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:23:15.349861  290654 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:23:15.349935  290654 kubeadm.go:319] OS: Linux
	I1126 20:23:15.350004  290654 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:23:15.350083  290654 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:23:15.350164  290654 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:23:15.350237  290654 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:23:15.350299  290654 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:23:15.350384  290654 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:23:15.350446  290654 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:23:15.350520  290654 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:23:15.411648  290654 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:23:15.411792  290654 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:23:15.411920  290654 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:23:15.418763  290654 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:23:11.634593  292013 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-178152" ...
	I1126 20:23:11.634649  292013 cli_runner.go:164] Run: docker start default-k8s-diff-port-178152
	I1126 20:23:11.926041  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:11.945873  292013 kic.go:430] container "default-k8s-diff-port-178152" state is running.
	I1126 20:23:11.946183  292013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:23:11.965407  292013 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/config.json ...
	I1126 20:23:11.965672  292013 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:11.965754  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:11.984253  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:11.984606  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:11.984627  292013 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:11.985310  292013 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53020->127.0.0.1:33103: read: connection reset by peer
	I1126 20:23:15.122812  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178152
	
	I1126 20:23:15.122840  292013 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-178152"
	I1126 20:23:15.122905  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.141545  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:15.141743  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:15.141756  292013 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-178152 && echo "default-k8s-diff-port-178152" | sudo tee /etc/hostname
	I1126 20:23:15.288999  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178152
	
	I1126 20:23:15.289074  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.307961  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:15.308207  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:15.308232  292013 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-178152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-178152/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-178152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:15.447684  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:15.447708  292013 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:15.447742  292013 ubuntu.go:190] setting up certificates
	I1126 20:23:15.447753  292013 provision.go:84] configureAuth start
	I1126 20:23:15.447805  292013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:23:15.466227  292013 provision.go:143] copyHostCerts
	I1126 20:23:15.466276  292013 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:15.466286  292013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:15.466350  292013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:15.466445  292013 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:15.466454  292013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:15.466520  292013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:15.466598  292013 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:15.466607  292013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:15.466632  292013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:15.466694  292013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-178152 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-178152 localhost minikube]
	I1126 20:23:15.723525  292013 provision.go:177] copyRemoteCerts
	I1126 20:23:15.723583  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:15.723615  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.741675  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:15.840142  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:15.856793  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1126 20:23:15.872789  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:23:15.889502  292013 provision.go:87] duration metric: took 441.73745ms to configureAuth
	I1126 20:23:15.889527  292013 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:15.889739  292013 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:15.889861  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.909189  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:15.909493  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:15.909522  292013 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:16.239537  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:16.239562  292013 machine.go:97] duration metric: took 4.273873255s to provisionDockerMachine
	I1126 20:23:16.239577  292013 start.go:293] postStartSetup for "default-k8s-diff-port-178152" (driver="docker")
	I1126 20:23:16.239591  292013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:16.239682  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:16.239737  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.260385  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:16.358126  292013 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:16.361405  292013 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:16.361440  292013 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:16.361451  292013 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:16.361509  292013 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:16.361599  292013 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:16.361707  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:16.369023  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:16.385925  292013 start.go:296] duration metric: took 146.337148ms for postStartSetup
	I1126 20:23:16.385989  292013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:16.386031  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.405445  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	W1126 20:23:13.193401  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:23:15.194538  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	I1126 20:23:16.502288  292013 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:16.506679  292013 fix.go:56] duration metric: took 4.890731938s for fixHost
	I1126 20:23:16.506702  292013 start.go:83] releasing machines lock for "default-k8s-diff-port-178152", held for 4.890770543s
	I1126 20:23:16.506772  292013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:23:16.524986  292013 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:16.525024  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.525076  292013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:16.525147  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.543349  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:16.544787  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:16.710155  292013 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:16.717137  292013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:16.751512  292013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:16.755985  292013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:16.756075  292013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:16.763515  292013 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:23:16.763533  292013 start.go:496] detecting cgroup driver to use...
	I1126 20:23:16.763556  292013 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:16.763596  292013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:16.777637  292013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:16.789084  292013 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:16.789130  292013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:16.802415  292013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:16.814305  292013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:16.894876  292013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:16.973549  292013 docker.go:234] disabling docker service ...
	I1126 20:23:16.973602  292013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:16.987105  292013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:16.998823  292013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:17.079192  292013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:17.154663  292013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:17.166248  292013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:17.179608  292013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:17.179659  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.187979  292013 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:17.188022  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.197441  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.205620  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.214614  292013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:17.222358  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.230646  292013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.238512  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
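The run of `sed -i` commands above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup manager to systemd, and replace the conmon cgroup. A sketch of the core substitutions against a scratch copy (the sample config content below is invented for illustration, not the real file):

```shell
# Scratch copy standing in for /etc/crio/crio.conf.d/02-crio.conf
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
EOF

# Same substitutions the log performs, minus sudo since we own the temp file
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```

Deleting the old `conmon_cgroup` line before re-appending it after `cgroup_manager` keeps the edit idempotent across restarts.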
	I1126 20:23:17.246532  292013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:17.253262  292013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:17.260346  292013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:17.336611  292013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:23:17.482298  292013 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:17.482365  292013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:17.487204  292013 start.go:564] Will wait 60s for crictl version
	I1126 20:23:17.487266  292013 ssh_runner.go:195] Run: which crictl
	I1126 20:23:17.490714  292013 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:17.516962  292013 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:17.517029  292013 ssh_runner.go:195] Run: crio --version
	I1126 20:23:17.546625  292013 ssh_runner.go:195] Run: crio --version
	I1126 20:23:17.576514  292013 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1126 20:23:14.209234  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:23:16.209528  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:15.421393  290654 out.go:252]   - Generating certificates and keys ...
	I1126 20:23:15.421469  290654 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:23:15.421584  290654 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:23:15.901705  290654 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:23:16.198158  290654 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:23:16.755333  290654 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:23:16.910521  290654 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:23:17.293843  290654 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:23:17.294078  290654 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-825702 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:23:18.053504  290654 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:23:18.053707  290654 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-825702 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:23:17.577646  292013 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-178152 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"

	I1126 20:23:17.596268  292013 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:17.600505  292013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:17.610494  292013 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:23:17.610599  292013 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:17.610638  292013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:17.642078  292013 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:17.642098  292013 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:17.642144  292013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:17.668002  292013 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:17.668024  292013 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:17.668033  292013 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1126 20:23:17.668159  292013 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-178152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:23:17.668231  292013 ssh_runner.go:195] Run: crio config
	I1126 20:23:17.728730  292013 cni.go:84] Creating CNI manager for ""
	I1126 20:23:17.728745  292013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:17.728757  292013 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:17.728780  292013 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-178152 NodeName:default-k8s-diff-port-178152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:17.728904  292013 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-178152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
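The kubeadm config dumped above bundles four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---` markers. A quick sanity check on such a multi-document file is counting the `kind:` entries; the fragment below is a hypothetical reduced copy, not the real `/var/tmp/minikube/kubeadm.yaml.new`:

```shell
# Reduced stand-in for the generated kubeadm.yaml
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# One "kind:" per document; expect 4
grep -c '^kind:' "$cfg"
```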
	I1126 20:23:17.728961  292013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:17.737340  292013 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:17.737397  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:17.744823  292013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1126 20:23:17.757195  292013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:17.769202  292013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1126 20:23:17.782349  292013 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:17.786032  292013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
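The `/etc/hosts` update above uses a grep-v-then-append idiom: strip any existing entry for the name, then echo a fresh one, so repeated runs stay idempotent. A sketch of the same pattern against a temp file (the file contents are invented; the hostname and IP are taken from the log):

```shell
# Scratch stand-in for /etc/hosts
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.85.2\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop any line already ending in the name, then append the fresh mapping;
# running this twice leaves exactly one entry
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'192.168.85.2\tcontrol-plane.minikube.internal'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

grep -c 'control-plane.minikube.internal' "$hosts"
```

The log writes through `sudo cp /tmp/h.$$ /etc/hosts` rather than redirecting directly, for the same privilege reason as the `tee` pipelines elsewhere in this run.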
	I1126 20:23:17.795101  292013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:17.873013  292013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:17.897757  292013 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152 for IP: 192.168.85.2
	I1126 20:23:17.897775  292013 certs.go:195] generating shared ca certs ...
	I1126 20:23:17.897795  292013 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:17.897932  292013 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:17.897986  292013 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:17.898001  292013 certs.go:257] generating profile certs ...
	I1126 20:23:17.898093  292013 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/client.key
	I1126 20:23:17.898162  292013 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/apiserver.key.e0e0c015
	I1126 20:23:17.898218  292013 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/proxy-client.key
	I1126 20:23:17.898357  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:17.898403  292013 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:17.898418  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:17.898486  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:17.898527  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:17.898563  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:17.898625  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:17.899165  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:17.918784  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:17.937235  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:17.955598  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:17.978718  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1126 20:23:17.998328  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:18.014824  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:18.030942  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:23:18.047085  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:18.063322  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:18.079509  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:18.098732  292013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:18.110426  292013 ssh_runner.go:195] Run: openssl version
	I1126 20:23:18.116110  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:18.124052  292013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:18.127654  292013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:18.127698  292013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:18.162629  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:23:18.170348  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:18.178740  292013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:18.182764  292013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:18.182806  292013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:18.234882  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:23:18.245881  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:18.255606  292013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:18.259552  292013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:18.259605  292013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:18.303253  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:23:18.312096  292013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:18.316008  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:23:18.350634  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:23:18.384786  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:23:18.431599  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:23:18.475753  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:23:18.526391  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:23:18.588346  292013 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:18.588449  292013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:18.588565  292013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:18.623451  292013 cri.go:89] found id: "851ab28993a8b771e5c0a2f66a99e6f22fbce78ae80259d843eb935540e459cb"
	I1126 20:23:18.623493  292013 cri.go:89] found id: "cd9d1e44673562de24369667807437c6a3c635356a7d55a049742bc674b8d8bb"
	I1126 20:23:18.623506  292013 cri.go:89] found id: "45aa87e14b73bdfe289340af54adbd31ea4129fb3bbd2a635ed0f95587efea73"
	I1126 20:23:18.623512  292013 cri.go:89] found id: "53d64e031f1a8bdf5d5b01443fbdf38f958b0ba7ea8c6514663f714eb5cedebf"
	I1126 20:23:18.623516  292013 cri.go:89] found id: ""
	I1126 20:23:18.623557  292013 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:23:18.638001  292013 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:18Z" level=error msg="open /run/runc: no such file or directory"
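	The `runc list -f json` failure above is tolerated: "open /run/runc: no such file or directory" simply means the runtime has not created its state directory, so there are no runc-managed containers to report. A minimal sketch of that probe (not minikube's actual code; `RUNC_ROOT` is an assumption defaulting to runc's standard state directory):

```shell
# Hedged sketch: list runc containers, treating a missing state directory
# as "no containers" rather than an error, as the log above does.
list_runc_containers() {
  root="${RUNC_ROOT:-/run/runc}"
  if [ -d "$root" ]; then
    # State dir exists; ask runc for the real container list.
    runc --root "$root" list -f json
  else
    # No state dir means nothing is running or paused.
    echo '[]'
  fi
}
```

	In the log, the empty result is what lets kubeadm.go proceed to the "found existing configuration files" restart path instead of aborting.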
	I1126 20:23:18.638079  292013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:18.647325  292013 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:23:18.647339  292013 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:23:18.647376  292013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:23:18.655974  292013 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:23:18.657075  292013 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-178152" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:18.657847  292013 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-178152" cluster setting kubeconfig missing "default-k8s-diff-port-178152" context setting]
	I1126 20:23:18.658988  292013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:18.661117  292013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:23:18.670104  292013 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:23:18.670133  292013 kubeadm.go:602] duration metric: took 22.788009ms to restartPrimaryControlPlane
	I1126 20:23:18.670142  292013 kubeadm.go:403] duration metric: took 81.823346ms to StartCluster
	I1126 20:23:18.670155  292013 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:18.670212  292013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:18.672246  292013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:18.672794  292013 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:18.672844  292013 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:23:18.672980  292013 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:23:18.673056  292013 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-178152"
	I1126 20:23:18.673072  292013 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-178152"
	W1126 20:23:18.673080  292013 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:23:18.673108  292013 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:23:18.673596  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.673682  292013 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-178152"
	I1126 20:23:18.673710  292013 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-178152"
	W1126 20:23:18.673722  292013 addons.go:248] addon dashboard should already be in state true
	I1126 20:23:18.673756  292013 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:23:18.673928  292013 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-178152"
	I1126 20:23:18.673951  292013 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-178152"
	I1126 20:23:18.674245  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.674255  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.675133  292013 out.go:179] * Verifying Kubernetes components...
	I1126 20:23:18.676193  292013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:18.701859  292013 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:23:18.701926  292013 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:23:18.703240  292013 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:18.703295  292013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:23:18.703350  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:18.703745  292013 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:23:17.693567  279050 pod_ready.go:94] pod "coredns-66bc5c9577-wl4xp" is "Ready"
	I1126 20:23:17.693591  279050 pod_ready.go:86] duration metric: took 35.505181868s for pod "coredns-66bc5c9577-wl4xp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.696095  279050 pod_ready.go:83] waiting for pod "etcd-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.713173  279050 pod_ready.go:94] pod "etcd-no-preload-026579" is "Ready"
	I1126 20:23:17.713232  279050 pod_ready.go:86] duration metric: took 17.078305ms for pod "etcd-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.718741  279050 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.723995  279050 pod_ready.go:94] pod "kube-apiserver-no-preload-026579" is "Ready"
	I1126 20:23:17.724017  279050 pod_ready.go:86] duration metric: took 5.252182ms for pod "kube-apiserver-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.726428  279050 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.894598  279050 pod_ready.go:94] pod "kube-controller-manager-no-preload-026579" is "Ready"
	I1126 20:23:17.894629  279050 pod_ready.go:86] duration metric: took 168.177715ms for pod "kube-controller-manager-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:18.091824  279050 pod_ready.go:83] waiting for pod "kube-proxy-ktbwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:18.492571  279050 pod_ready.go:94] pod "kube-proxy-ktbwp" is "Ready"
	I1126 20:23:18.492601  279050 pod_ready.go:86] duration metric: took 400.748457ms for pod "kube-proxy-ktbwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:18.693343  279050 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:19.091809  279050 pod_ready.go:94] pod "kube-scheduler-no-preload-026579" is "Ready"
	I1126 20:23:19.091845  279050 pod_ready.go:86] duration metric: took 398.476699ms for pod "kube-scheduler-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:19.091860  279050 pod_ready.go:40] duration metric: took 36.906405377s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:19.153238  279050 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:23:19.155165  279050 out.go:179] * Done! kubectl is now configured to use "no-preload-026579" cluster and "default" namespace by default
	I1126 20:23:18.705569  292013 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-178152"
	W1126 20:23:18.705587  292013 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:23:18.705612  292013 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:23:18.706157  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.706657  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:23:18.706739  292013 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:23:18.706807  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:18.739172  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:18.742081  292013 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:18.742144  292013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:23:18.742203  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:18.743125  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:18.771281  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:18.836808  292013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:18.849945  292013 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-178152" to be "Ready" ...
	I1126 20:23:18.858581  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:23:18.858600  292013 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:23:18.864903  292013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:18.873985  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:23:18.874003  292013 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:23:18.891785  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:23:18.891800  292013 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:23:18.898868  292013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:18.914781  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:23:18.914799  292013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:23:18.940507  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:23:18.940588  292013 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:23:18.961370  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:23:18.961480  292013 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:23:18.979847  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:23:18.979869  292013 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:23:18.997450  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:23:18.997496  292013 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:23:19.014774  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:23:19.014798  292013 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:23:19.030627  292013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:23:20.076513  292013 node_ready.go:49] node "default-k8s-diff-port-178152" is "Ready"
	I1126 20:23:20.076545  292013 node_ready.go:38] duration metric: took 1.226568266s for node "default-k8s-diff-port-178152" to be "Ready" ...
	I1126 20:23:20.076561  292013 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:23:20.076614  292013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:23:20.650346  292013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.785411832s)
	I1126 20:23:20.650423  292013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.751538841s)
	I1126 20:23:20.650697  292013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.620030532s)
	I1126 20:23:20.650744  292013 api_server.go:72] duration metric: took 1.977874686s to wait for apiserver process to appear ...
	I1126 20:23:20.650766  292013 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:23:20.650789  292013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:23:20.652272  292013 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-178152 addons enable metrics-server
	
	I1126 20:23:20.655372  292013 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:23:20.655401  292013 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:23:20.659424  292013 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1126 20:23:20.660341  292013 addons.go:530] duration metric: took 1.987365178s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1126 20:23:21.151632  292013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:23:21.157395  292013 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:23:21.157415  292013 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:23:18.885333  290654 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:23:19.301808  290654 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:23:19.695191  290654 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:23:19.695440  290654 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:23:19.825600  290654 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:23:20.340649  290654 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:23:20.724366  290654 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:23:21.485824  290654 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:23:21.625826  290654 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:23:21.626296  290654 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:23:21.629820  290654 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1126 20:23:18.214208  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:23:20.709235  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:21.631094  290654 out.go:252]   - Booting up control plane ...
	I1126 20:23:21.631238  290654 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:23:21.631371  290654 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:23:21.632360  290654 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:23:21.645214  290654 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:23:21.645361  290654 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:23:21.652406  290654 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:23:21.652729  290654 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:23:21.652815  290654 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:23:21.764903  290654 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:23:21.765102  290654 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 20:23:22.766639  290654 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001829324s
	I1126 20:23:22.771587  290654 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:23:22.771713  290654 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1126 20:23:22.771850  290654 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:23:22.771976  290654 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1126 20:23:22.710254  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:23.209497  281230 pod_ready.go:94] pod "coredns-66bc5c9577-s8rrr" is "Ready"
	I1126 20:23:23.209526  281230 pod_ready.go:86] duration metric: took 35.005140298s for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.212056  281230 pod_ready.go:83] waiting for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.215893  281230 pod_ready.go:94] pod "etcd-embed-certs-949294" is "Ready"
	I1126 20:23:23.215912  281230 pod_ready.go:86] duration metric: took 3.835439ms for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.217794  281230 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.221490  281230 pod_ready.go:94] pod "kube-apiserver-embed-certs-949294" is "Ready"
	I1126 20:23:23.221507  281230 pod_ready.go:86] duration metric: took 3.693704ms for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.223412  281230 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.408291  281230 pod_ready.go:94] pod "kube-controller-manager-embed-certs-949294" is "Ready"
	I1126 20:23:23.408318  281230 pod_ready.go:86] duration metric: took 184.882309ms for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.608513  281230 pod_ready.go:83] waiting for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.008474  281230 pod_ready.go:94] pod "kube-proxy-qnjvr" is "Ready"
	I1126 20:23:24.008506  281230 pod_ready.go:86] duration metric: took 399.965276ms for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.207557  281230 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.607949  281230 pod_ready.go:94] pod "kube-scheduler-embed-certs-949294" is "Ready"
	I1126 20:23:24.607973  281230 pod_ready.go:86] duration metric: took 400.390059ms for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.607985  281230 pod_ready.go:40] duration metric: took 36.408614043s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:24.660574  281230 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:23:24.662064  281230 out.go:179] * Done! kubectl is now configured to use "embed-certs-949294" cluster and "default" namespace by default
	I1126 20:23:21.651516  292013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:23:21.655923  292013 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1126 20:23:21.656902  292013 api_server.go:141] control plane version: v1.34.1
	I1126 20:23:21.656929  292013 api_server.go:131] duration metric: took 1.00615123s to wait for apiserver health ...
	I1126 20:23:21.656939  292013 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:23:21.660424  292013 system_pods.go:59] 8 kube-system pods found
	I1126 20:23:21.660494  292013 system_pods.go:61] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:21.660509  292013 system_pods.go:61] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:23:21.660522  292013 system_pods.go:61] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:23:21.660530  292013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:23:21.660541  292013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:23:21.660553  292013 system_pods.go:61] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:23:21.660563  292013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:23:21.660573  292013 system_pods.go:61] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:21.660578  292013 system_pods.go:74] duration metric: took 3.633523ms to wait for pod list to return data ...
	I1126 20:23:21.660586  292013 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:23:21.662722  292013 default_sa.go:45] found service account: "default"
	I1126 20:23:21.662739  292013 default_sa.go:55] duration metric: took 2.147793ms for default service account to be created ...
	I1126 20:23:21.662747  292013 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:23:21.665171  292013 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:21.665193  292013 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:21.665209  292013 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:23:21.665224  292013 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:23:21.665236  292013 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:23:21.665250  292013 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:23:21.665260  292013 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:23:21.665271  292013 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:23:21.665282  292013 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:21.665293  292013 system_pods.go:126] duration metric: took 2.539795ms to wait for k8s-apps to be running ...
	I1126 20:23:21.665305  292013 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:23:21.665350  292013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:21.679704  292013 system_svc.go:56] duration metric: took 14.393906ms WaitForService to wait for kubelet
	I1126 20:23:21.679732  292013 kubeadm.go:587] duration metric: took 3.006859665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:21.679763  292013 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:23:21.683714  292013 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:23:21.683746  292013 node_conditions.go:123] node cpu capacity is 8
	I1126 20:23:21.683761  292013 node_conditions.go:105] duration metric: took 3.992542ms to run NodePressure ...
	I1126 20:23:21.683776  292013 start.go:242] waiting for startup goroutines ...
	I1126 20:23:21.683787  292013 start.go:247] waiting for cluster config update ...
	I1126 20:23:21.683803  292013 start.go:256] writing updated cluster config ...
	I1126 20:23:21.684090  292013 ssh_runner.go:195] Run: rm -f paused
	I1126 20:23:21.690737  292013 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:21.694957  292013 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:23:23.700019  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:25.700369  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:24.334563  290654 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.562900759s
	I1126 20:23:25.096556  290654 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.32501021s
	I1126 20:23:26.773126  290654 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001480312s
	I1126 20:23:26.785982  290654 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:23:26.795346  290654 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:23:26.803771  290654 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:23:26.804039  290654 kubeadm.go:319] [mark-control-plane] Marking the node auto-825702 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:23:26.811540  290654 kubeadm.go:319] [bootstrap-token] Using token: cfepsv.ze7li0ueqiisv4u1
	I1126 20:23:26.812735  290654 out.go:252]   - Configuring RBAC rules ...
	I1126 20:23:26.812902  290654 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:23:26.815933  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:23:26.822050  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:23:26.824581  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:23:26.827827  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:23:26.830088  290654 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:23:27.180011  290654 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:23:27.604196  290654 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:23:28.179792  290654 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:23:28.181097  290654 kubeadm.go:319] 
	I1126 20:23:28.181178  290654 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:23:28.181188  290654 kubeadm.go:319] 
	I1126 20:23:28.181271  290654 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:23:28.181284  290654 kubeadm.go:319] 
	I1126 20:23:28.181314  290654 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:23:28.181393  290654 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:23:28.181508  290654 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:23:28.181518  290654 kubeadm.go:319] 
	I1126 20:23:28.181588  290654 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:23:28.181599  290654 kubeadm.go:319] 
	I1126 20:23:28.181662  290654 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:23:28.181671  290654 kubeadm.go:319] 
	I1126 20:23:28.181740  290654 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:23:28.181890  290654 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:23:28.181992  290654 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:23:28.182004  290654 kubeadm.go:319] 
	I1126 20:23:28.182118  290654 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:23:28.182257  290654 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:23:28.182277  290654 kubeadm.go:319] 
	I1126 20:23:28.182389  290654 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cfepsv.ze7li0ueqiisv4u1 \
	I1126 20:23:28.182607  290654 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 20:23:28.182665  290654 kubeadm.go:319] 	--control-plane 
	I1126 20:23:28.182683  290654 kubeadm.go:319] 
	I1126 20:23:28.182781  290654 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:23:28.182794  290654 kubeadm.go:319] 
	I1126 20:23:28.182920  290654 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cfepsv.ze7li0ueqiisv4u1 \
	I1126 20:23:28.183058  290654 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
	I1126 20:23:28.186330  290654 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 20:23:28.186520  290654 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:23:28.186555  290654 cni.go:84] Creating CNI manager for ""
	I1126 20:23:28.186568  290654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:28.189613  290654 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1126 20:23:27.701613  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:30.200387  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:28.190997  290654 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:23:28.196682  290654 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:23:28.196700  290654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:23:28.212764  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:23:28.451574  290654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:23:28.451657  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:28.451736  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-825702 minikube.k8s.io/updated_at=2025_11_26T20_23_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=auto-825702 minikube.k8s.io/primary=true
	I1126 20:23:28.594872  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:28.594872  290654 ops.go:34] apiserver oom_adj: -16
	I1126 20:23:29.095986  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:29.595806  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:30.095675  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:30.595668  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:31.095846  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:31.595663  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:32.095085  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:32.595453  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:33.095688  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:33.186891  290654 kubeadm.go:1114] duration metric: took 4.735299943s to wait for elevateKubeSystemPrivileges
	I1126 20:23:33.187041  290654 kubeadm.go:403] duration metric: took 18.004754645s to StartCluster
	I1126 20:23:33.187069  290654 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:33.187159  290654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:33.189959  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:33.190264  290654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:23:33.190276  290654 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:23:33.190348  290654 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:23:33.190418  290654 addons.go:70] Setting storage-provisioner=true in profile "auto-825702"
	I1126 20:23:33.190430  290654 addons.go:239] Setting addon storage-provisioner=true in "auto-825702"
	I1126 20:23:33.190452  290654 host.go:66] Checking if "auto-825702" exists ...
	I1126 20:23:33.190569  290654 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:33.190656  290654 addons.go:70] Setting default-storageclass=true in profile "auto-825702"
	I1126 20:23:33.190674  290654 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-825702"
	I1126 20:23:33.190997  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:33.191067  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:33.191443  290654 out.go:179] * Verifying Kubernetes components...
	I1126 20:23:33.193928  290654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:33.219111  290654 addons.go:239] Setting addon default-storageclass=true in "auto-825702"
	I1126 20:23:33.219159  290654 host.go:66] Checking if "auto-825702" exists ...
	I1126 20:23:33.219759  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:33.220320  290654 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:23:33.221988  290654 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:33.222009  290654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:23:33.222065  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:33.245865  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:33.249499  290654 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:33.249519  290654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:23:33.249591  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:33.273726  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:33.288579  290654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:23:33.352015  290654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:33.372793  290654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:33.394803  290654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:33.477827  290654 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1126 20:23:33.480376  290654 node_ready.go:35] waiting up to 15m0s for node "auto-825702" to be "Ready" ...
	I1126 20:23:33.693090  290654 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.269509096Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.269541179Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.269564678Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.275264124Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.275295501Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.275321955Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.279381026Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.279404988Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.279424435Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.283307271Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.283331882Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.283352449Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.286767996Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.286787703Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.518813434Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fa01afeb-c15b-4c37-9eb6-30991b90ab9e name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.519979116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1d6a264f-f491-4597-a954-b3d4b0525848 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.521634068Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds/dashboard-metrics-scraper" id=eaf205ab-f787-46ad-aba9-dd4128b5aab3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.522416351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.536752656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.537608948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.583417415Z" level=info msg="Created container d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds/dashboard-metrics-scraper" id=eaf205ab-f787-46ad-aba9-dd4128b5aab3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.585143775Z" level=info msg="Starting container: d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401" id=70c7049b-3d22-4496-be03-2e0c6189f699 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.589656663Z" level=info msg="Started container" PID=1782 containerID=d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds/dashboard-metrics-scraper id=70c7049b-3d22-4496-be03-2e0c6189f699 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa4cb50f83fdf272504a6f63af12a824b04062de7c1882d1db85de91c42a5637
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.687337437Z" level=info msg="Removing container: a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0" id=5d55905d-a668-432d-9649-4fbb8b161cc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.700225318Z" level=info msg="Removed container a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds/dashboard-metrics-scraper" id=5d55905d-a668-432d-9649-4fbb8b161cc6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d3aa9d6833a17       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   fa4cb50f83fdf       dashboard-metrics-scraper-6ffb444bf9-9crds   kubernetes-dashboard
	7f8751fb0a65f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   54c7878020476       storage-provisioner                          kube-system
	2a10859cf5d56       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   bf7f86d8a6e14       kubernetes-dashboard-855c9754f9-vghzh        kubernetes-dashboard
	978ebe7980e56       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   f7d098c06913e       busybox                                      default
	5e01aebce0762       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   a5c8ef94f6ae4       coredns-66bc5c9577-wl4xp                     kube-system
	3bda7bcb07b58       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   541a87cf67ff6       kube-proxy-ktbwp                             kube-system
	32e35f17feb89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   54c7878020476       storage-provisioner                          kube-system
	42828f83720fe       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   958a716237e0f       kindnet-8rfpj                                kube-system
	5a4a0e2af1862       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   a96f723a1547a       kube-apiserver-no-preload-026579             kube-system
	bbe6a4946c008       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   25ad52b01dfd7       kube-scheduler-no-preload-026579             kube-system
	d4451cac813aa       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   014515d76839a       kube-controller-manager-no-preload-026579    kube-system
	590f69567c94c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   25d55a4236642       etcd-no-preload-026579                       kube-system
	
	
	==> coredns [5e01aebce076296d3ed66d96beac70b5dd1097a82f3df65932b28a11c416eb6f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46632 - 43997 "HINFO IN 4146496324711485698.5438811130630472017. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073809414s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-026579
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-026579
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=no-preload-026579
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_21_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:21:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-026579
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:23:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:23:21 +0000   Wed, 26 Nov 2025 20:21:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:23:21 +0000   Wed, 26 Nov 2025 20:21:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:23:21 +0000   Wed, 26 Nov 2025 20:21:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:23:21 +0000   Wed, 26 Nov 2025 20:22:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-026579
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                896379d3-12e9-47c2-b887-9f21dde83abe
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-wl4xp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-no-preload-026579                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-8rfpj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-026579              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-026579     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-ktbwp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-026579              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9crds    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vghzh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node no-preload-026579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node no-preload-026579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node no-preload-026579 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node no-preload-026579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node no-preload-026579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node no-preload-026579 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node no-preload-026579 event: Registered Node no-preload-026579 in Controller
	  Normal  NodeReady                93s                  kubelet          Node no-preload-026579 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node no-preload-026579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node no-preload-026579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node no-preload-026579 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node no-preload-026579 event: Registered Node no-preload-026579 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [590f69567c94cb9063adb6b5b5bfedd56c93afe7320fc6431800372b19411ff4] <==
	{"level":"warn","ts":"2025-11-26T20:22:39.904659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.910332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.916491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.923126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.929604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.936349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.943067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.950226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.964493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.972416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.978512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.985520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.992770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.999830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.007796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.015679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.024554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.033153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.042315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.057188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.065053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.074364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.136450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:07.261409Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.143241ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356913385038398 > lease_revoke:<id:59069ac1d516e1ac>","response":"size:28"}
	{"level":"info","ts":"2025-11-26T20:23:07.609663Z","caller":"traceutil/trace.go:172","msg":"trace[1242984989] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"146.689979ms","start":"2025-11-26T20:23:07.462950Z","end":"2025-11-26T20:23:07.609640Z","steps":["trace[1242984989] 'process raft request'  (duration: 146.471159ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:23:34 up  1:06,  0 user,  load average: 3.22, 3.14, 2.12
	Linux no-preload-026579 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [42828f83720fe35dd743e7a5c065fe0193e00746358e315543e3a492e04cddc2] <==
	I1126 20:22:42.055393       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:22:42.055613       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:22:42.055753       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:22:42.055770       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:22:42.055790       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:22:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:22:42.264824       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:22:42.264843       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:22:42.264853       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:22:42.265012       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:23:12.264878       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:23:12.264955       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:23:12.265068       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:23:12.265076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1126 20:23:13.764985       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:23:13.765026       1 metrics.go:72] Registering metrics
	I1126 20:23:13.765094       1 controller.go:711] "Syncing nftables rules"
	I1126 20:23:22.264601       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:23:22.264656       1 main.go:301] handling current node
	I1126 20:23:32.266367       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:23:32.266409       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a4a0e2af1862b25492b690a7a862bef76306a1889835ec8f86fa09057dad6a9] <==
	I1126 20:22:40.647421       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:22:40.650837       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:22:40.654303       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:22:40.654333       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:22:40.654342       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:22:40.654355       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:22:40.654363       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:22:40.654366       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:22:40.654583       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:22:40.654596       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:22:40.654517       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:22:40.654408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:22:40.654410       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:22:40.662245       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:22:40.912325       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:22:40.941907       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:22:40.975705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:22:40.991133       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:22:40.997406       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:22:41.027780       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.115.129"}
	I1126 20:22:41.044410       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.81.142"}
	I1126 20:22:41.534527       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:22:43.981790       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:22:44.480972       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:22:44.531670       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d4451cac813aa54ddc31652afec1d7e8194495b96f8e083719c0cc55f90393e6] <==
	I1126 20:22:43.984393       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:22:43.985267       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1126 20:22:43.987498       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:22:43.988664       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1126 20:22:43.989839       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:22:43.992159       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:22:43.992225       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:22:43.992164       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:43.992286       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-026579"
	I1126 20:22:43.992350       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 20:22:43.996231       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:22:43.998513       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:22:44.000712       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:22:44.003990       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:22:44.026930       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:22:44.027037       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:22:44.028250       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:22:44.028374       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:22:44.028473       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:22:44.028658       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:22:44.033895       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:44.033915       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:22:44.033926       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:22:44.034993       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:44.037718       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [3bda7bcb07b58f6c63b7516471b9f44f6a342dfe7e086d6c5c44bb0df06149f9] <==
	I1126 20:22:41.921812       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:22:41.975399       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:22:42.075553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:22:42.075611       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:22:42.075735       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:22:42.092809       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:22:42.092857       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:22:42.097613       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:22:42.097977       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:22:42.097992       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:42.099200       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:22:42.099227       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:22:42.099253       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:22:42.099268       1 config.go:200] "Starting service config controller"
	I1126 20:22:42.099272       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:22:42.099274       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:22:42.099515       1 config.go:309] "Starting node config controller"
	I1126 20:22:42.099533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:22:42.099541       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:22:42.200284       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:22:42.200301       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:22:42.200303       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bbe6a4946c0088614f8a872edc385ef4bf724c29b1ba7f173936d426a0fc26e3] <==
	I1126 20:22:39.546629       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:22:40.606299       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:22:40.606396       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:40.612982       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1126 20:22:40.613078       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1126 20:22:40.613128       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:40.613139       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:40.613179       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:22:40.613188       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:22:40.613426       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:22:40.613603       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:22:40.713888       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1126 20:22:40.713889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:22:40.713934       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:22:47 no-preload-026579 kubelet[716]: I1126 20:22:47.456375     716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:22:51 no-preload-026579 kubelet[716]: I1126 20:22:51.036691     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vghzh" podStartSLOduration=2.703342485 podStartE2EDuration="7.036667655s" podCreationTimestamp="2025-11-26 20:22:44 +0000 UTC" firstStartedPulling="2025-11-26 20:22:44.950632649 +0000 UTC m=+6.539870215" lastFinishedPulling="2025-11-26 20:22:49.283957803 +0000 UTC m=+10.873195385" observedRunningTime="2025-11-26 20:22:49.591448745 +0000 UTC m=+11.180686326" watchObservedRunningTime="2025-11-26 20:22:51.036667655 +0000 UTC m=+12.625905239"
	Nov 26 20:22:52 no-preload-026579 kubelet[716]: I1126 20:22:52.582167     716 scope.go:117] "RemoveContainer" containerID="df0e17da5612b2815f13652663efc8c2ee00be08301bd7430754174460494590"
	Nov 26 20:22:53 no-preload-026579 kubelet[716]: I1126 20:22:53.586545     716 scope.go:117] "RemoveContainer" containerID="df0e17da5612b2815f13652663efc8c2ee00be08301bd7430754174460494590"
	Nov 26 20:22:53 no-preload-026579 kubelet[716]: I1126 20:22:53.586939     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:22:53 no-preload-026579 kubelet[716]: E1126 20:22:53.587112     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:22:54 no-preload-026579 kubelet[716]: I1126 20:22:54.590444     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:22:54 no-preload-026579 kubelet[716]: E1126 20:22:54.590710     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:22:56 no-preload-026579 kubelet[716]: I1126 20:22:56.401442     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:22:56 no-preload-026579 kubelet[716]: E1126 20:22:56.401719     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:23:07 no-preload-026579 kubelet[716]: I1126 20:23:07.516826     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:23:08 no-preload-026579 kubelet[716]: I1126 20:23:08.629384     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:23:08 no-preload-026579 kubelet[716]: I1126 20:23:08.629947     716 scope.go:117] "RemoveContainer" containerID="a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0"
	Nov 26 20:23:08 no-preload-026579 kubelet[716]: E1126 20:23:08.630176     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:23:12 no-preload-026579 kubelet[716]: I1126 20:23:12.640862     716 scope.go:117] "RemoveContainer" containerID="32e35f17feb89664f1fae773b335343aa39c1e0842c3b1752b38b7722a5b602f"
	Nov 26 20:23:16 no-preload-026579 kubelet[716]: I1126 20:23:16.401828     716 scope.go:117] "RemoveContainer" containerID="a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0"
	Nov 26 20:23:16 no-preload-026579 kubelet[716]: E1126 20:23:16.402072     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:23:28 no-preload-026579 kubelet[716]: I1126 20:23:28.517964     716 scope.go:117] "RemoveContainer" containerID="a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0"
	Nov 26 20:23:28 no-preload-026579 kubelet[716]: I1126 20:23:28.685263     716 scope.go:117] "RemoveContainer" containerID="a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0"
	Nov 26 20:23:28 no-preload-026579 kubelet[716]: I1126 20:23:28.685500     716 scope.go:117] "RemoveContainer" containerID="d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401"
	Nov 26 20:23:28 no-preload-026579 kubelet[716]: E1126 20:23:28.685729     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:23:31 no-preload-026579 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:23:31 no-preload-026579 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:23:31 no-preload-026579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:23:31 no-preload-026579 systemd[1]: kubelet.service: Consumed 1.595s CPU time.
	
	
	==> kubernetes-dashboard [2a10859cf5d562e2b2c673c950cfcf54f0d04d81fa0b79c317ba79e924252860] <==
	2025/11/26 20:22:49 Starting overwatch
	2025/11/26 20:22:49 Using namespace: kubernetes-dashboard
	2025/11/26 20:22:49 Using in-cluster config to connect to apiserver
	2025/11/26 20:22:49 Using secret token for csrf signing
	2025/11/26 20:22:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:22:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:22:49 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:22:49 Generating JWE encryption key
	2025/11/26 20:22:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:22:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:22:49 Initializing JWE encryption key from synchronized object
	2025/11/26 20:22:49 Creating in-cluster Sidecar client
	2025/11/26 20:22:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:22:49 Serving insecurely on HTTP port: 9090
	2025/11/26 20:23:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [32e35f17feb89664f1fae773b335343aa39c1e0842c3b1752b38b7722a5b602f] <==
	I1126 20:22:41.887783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:23:11.889870       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7f8751fb0a65f1680045ae79ecb0521ff69da6a9ef59fff50fa749d20fb9cf69] <==
	I1126 20:23:12.687863       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:23:12.696308       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:23:12.696349       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:23:12.698033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:16.152729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:20.413536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:24.011713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:27.065519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:30.087940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:30.098096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:23:30.098269       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:23:30.098491       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-026579_e07b4716-0145-4ebe-8624-1e2a80bb0f4f!
	I1126 20:23:30.098403       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"001ce068-24a6-4540-989a-014660d8c6e6", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-026579_e07b4716-0145-4ebe-8624-1e2a80bb0f4f became leader
	W1126 20:23:30.105287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:30.109830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:23:30.199511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-026579_e07b4716-0145-4ebe-8624-1e2a80bb0f4f!
	W1126 20:23:32.113019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:32.116894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:34.120387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:34.125505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-026579 -n no-preload-026579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-026579 -n no-preload-026579: exit status 2 (325.141391ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-026579 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-026579
helpers_test.go:243: (dbg) docker inspect no-preload-026579:

-- stdout --
	[
	    {
	        "Id": "9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32",
	        "Created": "2025-11-26T20:21:13.866220209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279357,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:22:31.957869863Z",
	            "FinishedAt": "2025-11-26T20:22:31.022327472Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/hostname",
	        "HostsPath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/hosts",
	        "LogPath": "/var/lib/docker/containers/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32/9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32-json.log",
	        "Name": "/no-preload-026579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-026579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-026579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9844cee89f7d44f0de3996dfc6f6df4e68130d5e18c7a43d8cc08597e0863f32",
	                "LowerDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba0da6391fe48e2d4ac14de16184303d8e1d2450a851e687b5256f0e38c43759/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-026579",
	                "Source": "/var/lib/docker/volumes/no-preload-026579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-026579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-026579",
	                "name.minikube.sigs.k8s.io": "no-preload-026579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2277f55c8ef1902ede7e3f4b3d395a458080b11744e33079f1231a9528121fe",
	            "SandboxKey": "/var/run/docker/netns/a2277f55c8ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-026579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2ae6f13df7ae90e563079e045184e161803a9312deeafb40deb6a3cda467fd0e",
	                    "EndpointID": "c2bae13b0e015af82c493bb224fd936eb5ba6d97dac61b1b4af008a630c15558",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "76:0a:e1:f1:ed:2e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-026579",
	                        "9844cee89f7d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-026579 -n no-preload-026579
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-026579 -n no-preload-026579: exit status 2 (316.211954ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-026579 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-026579 logs -n 25: (1.085892208s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-026579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p no-preload-026579 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p embed-certs-949294 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p no-preload-026579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-949294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ stop    │ -p newest-cni-297942 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-297942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ image   │ newest-cni-297942 image list --format=json                                                                                                                                                                                                    │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ pause   │ -p newest-cni-297942 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-178152 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ delete  │ -p newest-cni-297942                                                                                                                                                                                                                          │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ delete  │ -p newest-cni-297942                                                                                                                                                                                                                          │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p auto-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-178152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ image   │ no-preload-026579 image list --format=json                                                                                                                                                                                                    │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ pause   │ -p no-preload-026579 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:23:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:23:11.440383  292013 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:23:11.440496  292013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:11.440508  292013 out.go:374] Setting ErrFile to fd 2...
	I1126 20:23:11.440515  292013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:11.440723  292013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:23:11.441107  292013 out.go:368] Setting JSON to false
	I1126 20:23:11.442313  292013 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3941,"bootTime":1764184650,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:23:11.442360  292013 start.go:143] virtualization: kvm guest
	I1126 20:23:11.444216  292013 out.go:179] * [default-k8s-diff-port-178152] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:23:11.445318  292013 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:23:11.445306  292013 notify.go:221] Checking for updates...
	I1126 20:23:11.446480  292013 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:23:11.447697  292013 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:11.448830  292013 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:23:11.449874  292013 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:23:11.450880  292013 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:23:11.452236  292013 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:11.452747  292013 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:23:11.477152  292013 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:23:11.477223  292013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:11.530562  292013 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-26 20:23:11.521081776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:11.530668  292013 docker.go:319] overlay module found
	I1126 20:23:11.532344  292013 out.go:179] * Using the docker driver based on existing profile
	I1126 20:23:11.533553  292013 start.go:309] selected driver: docker
	I1126 20:23:11.533572  292013 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:11.533665  292013 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:23:11.534315  292013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:11.590661  292013 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-26 20:23:11.581789918 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:11.590918  292013 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:11.590946  292013 cni.go:84] Creating CNI manager for ""
	I1126 20:23:11.590995  292013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:11.591030  292013 start.go:353] cluster config:
	{Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:11.592614  292013 out.go:179] * Starting "default-k8s-diff-port-178152" primary control-plane node in "default-k8s-diff-port-178152" cluster
	I1126 20:23:11.593974  292013 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:23:11.595037  292013 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:23:11.596046  292013 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:11.596075  292013 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:23:11.596085  292013 cache.go:65] Caching tarball of preloaded images
	I1126 20:23:11.596139  292013 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:23:11.596167  292013 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:23:11.596174  292013 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:23:11.596261  292013 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/config.json ...
	I1126 20:23:11.615795  292013 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:23:11.615813  292013 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:23:11.615829  292013 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:23:11.615858  292013 start.go:360] acquireMachinesLock for default-k8s-diff-port-178152: {Name:mk205db4bd139b8853f3d786653274635beb61e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:23:11.615920  292013 start.go:364] duration metric: took 34.361µs to acquireMachinesLock for "default-k8s-diff-port-178152"
	I1126 20:23:11.615936  292013 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:23:11.615941  292013 fix.go:54] fixHost starting: 
	I1126 20:23:11.616144  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:11.633041  292013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-178152: state=Stopped err=<nil>
	W1126 20:23:11.633069  292013 fix.go:138] unexpected machine state, will restart: <nil>
	W1126 20:23:08.695965  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:23:11.193321  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:23:09.709550  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:23:11.709818  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:08.134694  290654 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-825702 --name auto-825702 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-825702 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-825702 --network auto-825702 --ip 192.168.103.2 --volume auto-825702:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:23:08.459052  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Running}}
	I1126 20:23:08.476518  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:08.493237  290654 cli_runner.go:164] Run: docker exec auto-825702 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:23:08.540339  290654 oci.go:144] the created container "auto-825702" has a running status.
	I1126 20:23:08.540374  290654 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa...
	I1126 20:23:08.625248  290654 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:23:08.653620  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:08.671280  290654 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:23:08.671296  290654 kic_runner.go:114] Args: [docker exec --privileged auto-825702 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:23:08.732039  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:08.755179  290654 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:08.755285  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:08.780893  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:08.781238  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:08.781257  290654 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:08.782168  290654 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47660->127.0.0.1:33098: read: connection reset by peer
	I1126 20:23:11.933816  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-825702
	
	I1126 20:23:11.933845  290654 ubuntu.go:182] provisioning hostname "auto-825702"
	I1126 20:23:11.933942  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:11.955152  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:11.955427  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:11.955445  290654 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-825702 && echo "auto-825702" | sudo tee /etc/hostname
	I1126 20:23:12.106616  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-825702
	
	I1126 20:23:12.106688  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.126835  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:12.127147  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:12.127173  290654 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-825702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-825702/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-825702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:12.277739  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:12.277766  290654 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:12.277789  290654 ubuntu.go:190] setting up certificates
	I1126 20:23:12.277804  290654 provision.go:84] configureAuth start
	I1126 20:23:12.277864  290654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-825702
	I1126 20:23:12.295168  290654 provision.go:143] copyHostCerts
	I1126 20:23:12.295223  290654 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:12.295236  290654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:12.295296  290654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:12.295381  290654 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:12.295390  290654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:12.295415  290654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:12.295497  290654 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:12.295506  290654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:12.295534  290654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:12.295591  290654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.auto-825702 san=[127.0.0.1 192.168.103.2 auto-825702 localhost minikube]
	I1126 20:23:12.321795  290654 provision.go:177] copyRemoteCerts
	I1126 20:23:12.321839  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:12.321870  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.339200  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:12.437185  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1126 20:23:12.456201  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:23:12.472910  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:12.489243  290654 provision.go:87] duration metric: took 211.42653ms to configureAuth
	I1126 20:23:12.489265  290654 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:12.489416  290654 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:12.489511  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.507582  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:12.507780  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:12.507796  290654 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:12.781449  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:12.781491  290654 machine.go:97] duration metric: took 4.026285211s to provisionDockerMachine
	I1126 20:23:12.781503  290654 client.go:176] duration metric: took 9.486657251s to LocalClient.Create
	I1126 20:23:12.781520  290654 start.go:167] duration metric: took 9.48674154s to libmachine.API.Create "auto-825702"
	I1126 20:23:12.781527  290654 start.go:293] postStartSetup for "auto-825702" (driver="docker")
	I1126 20:23:12.781535  290654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:12.781581  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:12.781622  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.801338  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:12.900997  290654 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:12.904439  290654 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:12.904478  290654 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:12.904490  290654 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:12.904539  290654 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:12.904630  290654 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:12.904740  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:12.912016  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:12.931277  290654 start.go:296] duration metric: took 149.73924ms for postStartSetup
	I1126 20:23:12.931620  290654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-825702
	I1126 20:23:12.948897  290654 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/config.json ...
	I1126 20:23:12.949153  290654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:12.949198  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.966056  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:13.061265  290654 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:13.065549  290654 start.go:128] duration metric: took 9.77265288s to createHost
	I1126 20:23:13.065569  290654 start.go:83] releasing machines lock for "auto-825702", held for 9.772807938s
	I1126 20:23:13.065624  290654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-825702
	I1126 20:23:13.082987  290654 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:13.083045  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:13.083065  290654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:13.083125  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:13.101098  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:13.101658  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:13.248108  290654 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:13.254244  290654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:13.288072  290654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:13.292438  290654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:13.292520  290654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:13.317258  290654 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:23:13.317277  290654 start.go:496] detecting cgroup driver to use...
	I1126 20:23:13.317301  290654 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:13.317343  290654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:13.332701  290654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:13.343996  290654 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:13.344063  290654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:13.359920  290654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:13.376200  290654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:13.458202  290654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:13.545058  290654 docker.go:234] disabling docker service ...
	I1126 20:23:13.545125  290654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:13.563618  290654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:13.575589  290654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:13.659232  290654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:13.741598  290654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:13.753230  290654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:13.766347  290654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:13.766400  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.776320  290654 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:13.776363  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.785041  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.792995  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.801178  290654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:13.808838  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.817198  290654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.829677  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.837756  290654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:13.844718  290654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:13.851623  290654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:13.929048  290654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:23:14.058401  290654 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:14.058487  290654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:14.062290  290654 start.go:564] Will wait 60s for crictl version
	I1126 20:23:14.062353  290654 ssh_runner.go:195] Run: which crictl
	I1126 20:23:14.065660  290654 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:14.091120  290654 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:14.091210  290654 ssh_runner.go:195] Run: crio --version
	I1126 20:23:14.117155  290654 ssh_runner.go:195] Run: crio --version
	I1126 20:23:14.145211  290654 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:23:14.146347  290654 cli_runner.go:164] Run: docker network inspect auto-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:14.163312  290654 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:14.167143  290654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:14.176842  290654 kubeadm.go:884] updating cluster {Name:auto-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:23:14.176954  290654 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:14.177008  290654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:14.209406  290654 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:14.209426  290654 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:14.209480  290654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:14.233034  290654 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:14.233054  290654 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:14.233064  290654 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1126 20:23:14.233167  290654 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-825702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:23:14.233234  290654 ssh_runner.go:195] Run: crio config
	I1126 20:23:14.277192  290654 cni.go:84] Creating CNI manager for ""
	I1126 20:23:14.277214  290654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:14.277232  290654 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:14.277262  290654 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-825702 NodeName:auto-825702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:14.277404  290654 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-825702"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:23:14.277482  290654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:14.285340  290654 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:14.285386  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:14.292836  290654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1126 20:23:14.304841  290654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:14.319148  290654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1126 20:23:14.330598  290654 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:14.333692  290654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:14.342648  290654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:14.418860  290654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:14.441407  290654 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702 for IP: 192.168.103.2
	I1126 20:23:14.441425  290654 certs.go:195] generating shared ca certs ...
	I1126 20:23:14.441445  290654 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.441599  290654 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:14.441660  290654 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:14.441675  290654 certs.go:257] generating profile certs ...
	I1126 20:23:14.441739  290654 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.key
	I1126 20:23:14.441756  290654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.crt with IP's: []
	I1126 20:23:14.561248  290654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.crt ...
	I1126 20:23:14.561273  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.crt: {Name:mka78bb7cd65f448b3a66a8ed3242d744cbd3ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.561443  290654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.key ...
	I1126 20:23:14.561471  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.key: {Name:mk7e6b179f66f415078976ea7604686ca387360b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.561580  290654 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac
	I1126 20:23:14.561598  290654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1126 20:23:14.653268  290654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac ...
	I1126 20:23:14.653291  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac: {Name:mk19073e3da57c61475b1d8ab67fc8245bda1990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.653426  290654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac ...
	I1126 20:23:14.653442  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac: {Name:mk442d2e6204a99840a9704e9c26d0fbee8bfeb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.653547  290654 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt
	I1126 20:23:14.653646  290654 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key
	I1126 20:23:14.653728  290654 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key
	I1126 20:23:14.653748  290654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt with IP's: []
	I1126 20:23:14.813410  290654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt ...
	I1126 20:23:14.813435  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt: {Name:mkde4786d8d21ddb4efdf9613c2ade685abc5c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.813610  290654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key ...
	I1126 20:23:14.813627  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key: {Name:mkd76ddd51996d4102db39f9558a24d218af9bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.813815  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:14.813862  290654 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:14.813874  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:14.813912  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:14.813952  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:14.814033  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:14.814101  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:14.814651  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:14.832871  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:14.849984  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:14.866732  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:14.882934  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1126 20:23:14.899525  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:14.915794  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:14.931980  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:23:14.947744  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:14.965250  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:14.981169  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:14.997436  290654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:15.009230  290654 ssh_runner.go:195] Run: openssl version
	I1126 20:23:15.015235  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:15.022951  290654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:15.026235  290654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:15.026277  290654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:15.060428  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:23:15.068547  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:15.076572  290654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:15.080134  290654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:15.080168  290654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:15.113444  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:23:15.121470  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:15.129887  290654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:15.133669  290654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:15.133717  290654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:15.170371  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:23:15.178639  290654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:15.182230  290654 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:23:15.182288  290654 kubeadm.go:401] StartCluster: {Name:auto-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:15.182372  290654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:15.182417  290654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:15.213578  290654 cri.go:89] found id: ""
	I1126 20:23:15.213641  290654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:15.221802  290654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:23:15.229340  290654 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:23:15.229390  290654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:23:15.236999  290654 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:23:15.237013  290654 kubeadm.go:158] found existing configuration files:
	
	I1126 20:23:15.237046  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:23:15.243974  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:23:15.244013  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:23:15.250710  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:23:15.257608  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:23:15.257644  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:23:15.264135  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:23:15.270882  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:23:15.270929  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:23:15.277491  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:23:15.284739  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:23:15.284784  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:23:15.292713  290654 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:23:15.331393  290654 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:23:15.331452  290654 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:23:15.349760  290654 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:23:15.349861  290654 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:23:15.349935  290654 kubeadm.go:319] OS: Linux
	I1126 20:23:15.350004  290654 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:23:15.350083  290654 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:23:15.350164  290654 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:23:15.350237  290654 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:23:15.350299  290654 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:23:15.350384  290654 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:23:15.350446  290654 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:23:15.350520  290654 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:23:15.411648  290654 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:23:15.411792  290654 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:23:15.411920  290654 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:23:15.418763  290654 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:23:11.634593  292013 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-178152" ...
	I1126 20:23:11.634649  292013 cli_runner.go:164] Run: docker start default-k8s-diff-port-178152
	I1126 20:23:11.926041  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:11.945873  292013 kic.go:430] container "default-k8s-diff-port-178152" state is running.
	I1126 20:23:11.946183  292013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:23:11.965407  292013 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/config.json ...
	I1126 20:23:11.965672  292013 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:11.965754  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:11.984253  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:11.984606  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:11.984627  292013 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:11.985310  292013 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53020->127.0.0.1:33103: read: connection reset by peer
	I1126 20:23:15.122812  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178152
	
	I1126 20:23:15.122840  292013 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-178152"
	I1126 20:23:15.122905  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.141545  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:15.141743  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:15.141756  292013 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-178152 && echo "default-k8s-diff-port-178152" | sudo tee /etc/hostname
	I1126 20:23:15.288999  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178152
	
	I1126 20:23:15.289074  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.307961  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:15.308207  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:15.308232  292013 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-178152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-178152/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-178152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:15.447684  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:15.447708  292013 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:15.447742  292013 ubuntu.go:190] setting up certificates
	I1126 20:23:15.447753  292013 provision.go:84] configureAuth start
	I1126 20:23:15.447805  292013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:23:15.466227  292013 provision.go:143] copyHostCerts
	I1126 20:23:15.466276  292013 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:15.466286  292013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:15.466350  292013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:15.466445  292013 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:15.466454  292013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:15.466520  292013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:15.466598  292013 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:15.466607  292013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:15.466632  292013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:15.466694  292013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-178152 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-178152 localhost minikube]
	I1126 20:23:15.723525  292013 provision.go:177] copyRemoteCerts
	I1126 20:23:15.723583  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:15.723615  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.741675  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:15.840142  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:15.856793  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1126 20:23:15.872789  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:23:15.889502  292013 provision.go:87] duration metric: took 441.73745ms to configureAuth
	I1126 20:23:15.889527  292013 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:15.889739  292013 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:15.889861  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.909189  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:15.909493  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:15.909522  292013 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:16.239537  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:16.239562  292013 machine.go:97] duration metric: took 4.273873255s to provisionDockerMachine
	I1126 20:23:16.239577  292013 start.go:293] postStartSetup for "default-k8s-diff-port-178152" (driver="docker")
	I1126 20:23:16.239591  292013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:16.239682  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:16.239737  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.260385  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:16.358126  292013 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:16.361405  292013 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:16.361440  292013 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:16.361451  292013 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:16.361509  292013 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:16.361599  292013 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:16.361707  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:16.369023  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:16.385925  292013 start.go:296] duration metric: took 146.337148ms for postStartSetup
	I1126 20:23:16.385989  292013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:16.386031  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.405445  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	W1126 20:23:13.193401  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:23:15.194538  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	I1126 20:23:16.502288  292013 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:16.506679  292013 fix.go:56] duration metric: took 4.890731938s for fixHost
	I1126 20:23:16.506702  292013 start.go:83] releasing machines lock for "default-k8s-diff-port-178152", held for 4.890770543s
	I1126 20:23:16.506772  292013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:23:16.524986  292013 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:16.525024  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.525076  292013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:16.525147  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.543349  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:16.544787  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:16.710155  292013 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:16.717137  292013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:16.751512  292013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:16.755985  292013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:16.756075  292013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:16.763515  292013 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:23:16.763533  292013 start.go:496] detecting cgroup driver to use...
	I1126 20:23:16.763556  292013 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:16.763596  292013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:16.777637  292013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:16.789084  292013 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:16.789130  292013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:16.802415  292013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:16.814305  292013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:16.894876  292013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:16.973549  292013 docker.go:234] disabling docker service ...
	I1126 20:23:16.973602  292013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:16.987105  292013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:16.998823  292013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:17.079192  292013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:17.154663  292013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:17.166248  292013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:17.179608  292013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:17.179659  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.187979  292013 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:17.188022  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.197441  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.205620  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.214614  292013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:17.222358  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.230646  292013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.238512  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.246532  292013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:17.253262  292013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:17.260346  292013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:17.336611  292013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:23:17.482298  292013 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:17.482365  292013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:17.487204  292013 start.go:564] Will wait 60s for crictl version
	I1126 20:23:17.487266  292013 ssh_runner.go:195] Run: which crictl
	I1126 20:23:17.490714  292013 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:17.516962  292013 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:17.517029  292013 ssh_runner.go:195] Run: crio --version
	I1126 20:23:17.546625  292013 ssh_runner.go:195] Run: crio --version
	I1126 20:23:17.576514  292013 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1126 20:23:14.209234  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:23:16.209528  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:15.421393  290654 out.go:252]   - Generating certificates and keys ...
	I1126 20:23:15.421469  290654 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:23:15.421584  290654 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:23:15.901705  290654 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:23:16.198158  290654 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:23:16.755333  290654 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:23:16.910521  290654 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:23:17.293843  290654 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:23:17.294078  290654 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-825702 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:23:18.053504  290654 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:23:18.053707  290654 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-825702 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:23:17.577646  292013 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-178152 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:17.596268  292013 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:17.600505  292013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:17.610494  292013 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:23:17.610599  292013 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:17.610638  292013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:17.642078  292013 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:17.642098  292013 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:17.642144  292013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:17.668002  292013 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:17.668024  292013 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:17.668033  292013 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1126 20:23:17.668159  292013 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-178152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:23:17.668231  292013 ssh_runner.go:195] Run: crio config
	I1126 20:23:17.728730  292013 cni.go:84] Creating CNI manager for ""
	I1126 20:23:17.728745  292013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:17.728757  292013 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:17.728780  292013 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-178152 NodeName:default-k8s-diff-port-178152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:17.728904  292013 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-178152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:23:17.728961  292013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:17.737340  292013 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:17.737397  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:17.744823  292013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1126 20:23:17.757195  292013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:17.769202  292013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1126 20:23:17.782349  292013 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:17.786032  292013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:17.795101  292013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:17.873013  292013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:17.897757  292013 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152 for IP: 192.168.85.2
	I1126 20:23:17.897775  292013 certs.go:195] generating shared ca certs ...
	I1126 20:23:17.897795  292013 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:17.897932  292013 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:17.897986  292013 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:17.898001  292013 certs.go:257] generating profile certs ...
	I1126 20:23:17.898093  292013 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/client.key
	I1126 20:23:17.898162  292013 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/apiserver.key.e0e0c015
	I1126 20:23:17.898218  292013 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/proxy-client.key
	I1126 20:23:17.898357  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:17.898403  292013 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:17.898418  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:17.898486  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:17.898527  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:17.898563  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:17.898625  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:17.899165  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:17.918784  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:17.937235  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:17.955598  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:17.978718  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1126 20:23:17.998328  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:18.014824  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:18.030942  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:23:18.047085  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:18.063322  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:18.079509  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:18.098732  292013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:18.110426  292013 ssh_runner.go:195] Run: openssl version
	I1126 20:23:18.116110  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:18.124052  292013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:18.127654  292013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:18.127698  292013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:18.162629  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:23:18.170348  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:18.178740  292013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:18.182764  292013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:18.182806  292013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:18.234882  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:23:18.245881  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:18.255606  292013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:18.259552  292013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:18.259605  292013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:18.303253  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:23:18.312096  292013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:18.316008  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:23:18.350634  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:23:18.384786  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:23:18.431599  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:23:18.475753  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:23:18.526391  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:23:18.588346  292013 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:18.588449  292013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:18.588565  292013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:18.623451  292013 cri.go:89] found id: "851ab28993a8b771e5c0a2f66a99e6f22fbce78ae80259d843eb935540e459cb"
	I1126 20:23:18.623493  292013 cri.go:89] found id: "cd9d1e44673562de24369667807437c6a3c635356a7d55a049742bc674b8d8bb"
	I1126 20:23:18.623506  292013 cri.go:89] found id: "45aa87e14b73bdfe289340af54adbd31ea4129fb3bbd2a635ed0f95587efea73"
	I1126 20:23:18.623512  292013 cri.go:89] found id: "53d64e031f1a8bdf5d5b01443fbdf38f958b0ba7ea8c6514663f714eb5cedebf"
	I1126 20:23:18.623516  292013 cri.go:89] found id: ""
	I1126 20:23:18.623557  292013 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:23:18.638001  292013 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:18Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:23:18.638079  292013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:18.647325  292013 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:23:18.647339  292013 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:23:18.647376  292013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:23:18.655974  292013 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:23:18.657075  292013 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-178152" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:18.657847  292013 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-178152" cluster setting kubeconfig missing "default-k8s-diff-port-178152" context setting]
	I1126 20:23:18.658988  292013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:18.661117  292013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:23:18.670104  292013 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:23:18.670133  292013 kubeadm.go:602] duration metric: took 22.788009ms to restartPrimaryControlPlane
	I1126 20:23:18.670142  292013 kubeadm.go:403] duration metric: took 81.823346ms to StartCluster
	I1126 20:23:18.670155  292013 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:18.670212  292013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:18.672246  292013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:18.672794  292013 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:18.672844  292013 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:23:18.672980  292013 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:23:18.673056  292013 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-178152"
	I1126 20:23:18.673072  292013 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-178152"
	W1126 20:23:18.673080  292013 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:23:18.673108  292013 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:23:18.673596  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.673682  292013 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-178152"
	I1126 20:23:18.673710  292013 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-178152"
	W1126 20:23:18.673722  292013 addons.go:248] addon dashboard should already be in state true
	I1126 20:23:18.673756  292013 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:23:18.673928  292013 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-178152"
	I1126 20:23:18.673951  292013 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-178152"
	I1126 20:23:18.674245  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.674255  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.675133  292013 out.go:179] * Verifying Kubernetes components...
	I1126 20:23:18.676193  292013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:18.701859  292013 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:23:18.701926  292013 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:23:18.703240  292013 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:18.703295  292013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:23:18.703350  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:18.703745  292013 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:23:17.693567  279050 pod_ready.go:94] pod "coredns-66bc5c9577-wl4xp" is "Ready"
	I1126 20:23:17.693591  279050 pod_ready.go:86] duration metric: took 35.505181868s for pod "coredns-66bc5c9577-wl4xp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.696095  279050 pod_ready.go:83] waiting for pod "etcd-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.713173  279050 pod_ready.go:94] pod "etcd-no-preload-026579" is "Ready"
	I1126 20:23:17.713232  279050 pod_ready.go:86] duration metric: took 17.078305ms for pod "etcd-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.718741  279050 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.723995  279050 pod_ready.go:94] pod "kube-apiserver-no-preload-026579" is "Ready"
	I1126 20:23:17.724017  279050 pod_ready.go:86] duration metric: took 5.252182ms for pod "kube-apiserver-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.726428  279050 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.894598  279050 pod_ready.go:94] pod "kube-controller-manager-no-preload-026579" is "Ready"
	I1126 20:23:17.894629  279050 pod_ready.go:86] duration metric: took 168.177715ms for pod "kube-controller-manager-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:18.091824  279050 pod_ready.go:83] waiting for pod "kube-proxy-ktbwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:18.492571  279050 pod_ready.go:94] pod "kube-proxy-ktbwp" is "Ready"
	I1126 20:23:18.492601  279050 pod_ready.go:86] duration metric: took 400.748457ms for pod "kube-proxy-ktbwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:18.693343  279050 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:19.091809  279050 pod_ready.go:94] pod "kube-scheduler-no-preload-026579" is "Ready"
	I1126 20:23:19.091845  279050 pod_ready.go:86] duration metric: took 398.476699ms for pod "kube-scheduler-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:19.091860  279050 pod_ready.go:40] duration metric: took 36.906405377s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:19.153238  279050 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:23:19.155165  279050 out.go:179] * Done! kubectl is now configured to use "no-preload-026579" cluster and "default" namespace by default
	I1126 20:23:18.705569  292013 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-178152"
	W1126 20:23:18.705587  292013 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:23:18.705612  292013 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:23:18.706157  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.706657  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:23:18.706739  292013 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:23:18.706807  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:18.739172  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:18.742081  292013 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:18.742144  292013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:23:18.742203  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:18.743125  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:18.771281  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:18.836808  292013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:18.849945  292013 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-178152" to be "Ready" ...
	I1126 20:23:18.858581  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:23:18.858600  292013 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:23:18.864903  292013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:18.873985  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:23:18.874003  292013 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:23:18.891785  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:23:18.891800  292013 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:23:18.898868  292013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:18.914781  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:23:18.914799  292013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:23:18.940507  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:23:18.940588  292013 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:23:18.961370  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:23:18.961480  292013 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:23:18.979847  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:23:18.979869  292013 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:23:18.997450  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:23:18.997496  292013 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:23:19.014774  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:23:19.014798  292013 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:23:19.030627  292013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:23:20.076513  292013 node_ready.go:49] node "default-k8s-diff-port-178152" is "Ready"
	I1126 20:23:20.076545  292013 node_ready.go:38] duration metric: took 1.226568266s for node "default-k8s-diff-port-178152" to be "Ready" ...
	I1126 20:23:20.076561  292013 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:23:20.076614  292013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:23:20.650346  292013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.785411832s)
	I1126 20:23:20.650423  292013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.751538841s)
	I1126 20:23:20.650697  292013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.620030532s)
	I1126 20:23:20.650744  292013 api_server.go:72] duration metric: took 1.977874686s to wait for apiserver process to appear ...
	I1126 20:23:20.650766  292013 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:23:20.650789  292013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:23:20.652272  292013 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-178152 addons enable metrics-server
	
	I1126 20:23:20.655372  292013 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:23:20.655401  292013 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:23:20.659424  292013 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1126 20:23:20.660341  292013 addons.go:530] duration metric: took 1.987365178s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1126 20:23:21.151632  292013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:23:21.157395  292013 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:23:21.157415  292013 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:23:18.885333  290654 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:23:19.301808  290654 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:23:19.695191  290654 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:23:19.695440  290654 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:23:19.825600  290654 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:23:20.340649  290654 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:23:20.724366  290654 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:23:21.485824  290654 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:23:21.625826  290654 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:23:21.626296  290654 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:23:21.629820  290654 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1126 20:23:18.214208  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:23:20.709235  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:21.631094  290654 out.go:252]   - Booting up control plane ...
	I1126 20:23:21.631238  290654 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:23:21.631371  290654 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:23:21.632360  290654 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:23:21.645214  290654 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:23:21.645361  290654 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:23:21.652406  290654 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:23:21.652729  290654 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:23:21.652815  290654 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:23:21.764903  290654 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:23:21.765102  290654 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 20:23:22.766639  290654 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001829324s
	I1126 20:23:22.771587  290654 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:23:22.771713  290654 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1126 20:23:22.771850  290654 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:23:22.771976  290654 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1126 20:23:22.710254  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:23.209497  281230 pod_ready.go:94] pod "coredns-66bc5c9577-s8rrr" is "Ready"
	I1126 20:23:23.209526  281230 pod_ready.go:86] duration metric: took 35.005140298s for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.212056  281230 pod_ready.go:83] waiting for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.215893  281230 pod_ready.go:94] pod "etcd-embed-certs-949294" is "Ready"
	I1126 20:23:23.215912  281230 pod_ready.go:86] duration metric: took 3.835439ms for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.217794  281230 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.221490  281230 pod_ready.go:94] pod "kube-apiserver-embed-certs-949294" is "Ready"
	I1126 20:23:23.221507  281230 pod_ready.go:86] duration metric: took 3.693704ms for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.223412  281230 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.408291  281230 pod_ready.go:94] pod "kube-controller-manager-embed-certs-949294" is "Ready"
	I1126 20:23:23.408318  281230 pod_ready.go:86] duration metric: took 184.882309ms for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.608513  281230 pod_ready.go:83] waiting for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.008474  281230 pod_ready.go:94] pod "kube-proxy-qnjvr" is "Ready"
	I1126 20:23:24.008506  281230 pod_ready.go:86] duration metric: took 399.965276ms for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.207557  281230 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.607949  281230 pod_ready.go:94] pod "kube-scheduler-embed-certs-949294" is "Ready"
	I1126 20:23:24.607973  281230 pod_ready.go:86] duration metric: took 400.390059ms for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.607985  281230 pod_ready.go:40] duration metric: took 36.408614043s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:24.660574  281230 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:23:24.662064  281230 out.go:179] * Done! kubectl is now configured to use "embed-certs-949294" cluster and "default" namespace by default
	I1126 20:23:21.651516  292013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:23:21.655923  292013 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1126 20:23:21.656902  292013 api_server.go:141] control plane version: v1.34.1
	I1126 20:23:21.656929  292013 api_server.go:131] duration metric: took 1.00615123s to wait for apiserver health ...
	I1126 20:23:21.656939  292013 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:23:21.660424  292013 system_pods.go:59] 8 kube-system pods found
	I1126 20:23:21.660494  292013 system_pods.go:61] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:21.660509  292013 system_pods.go:61] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:23:21.660522  292013 system_pods.go:61] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:23:21.660530  292013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:23:21.660541  292013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:23:21.660553  292013 system_pods.go:61] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:23:21.660563  292013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:23:21.660573  292013 system_pods.go:61] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:21.660578  292013 system_pods.go:74] duration metric: took 3.633523ms to wait for pod list to return data ...
	I1126 20:23:21.660586  292013 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:23:21.662722  292013 default_sa.go:45] found service account: "default"
	I1126 20:23:21.662739  292013 default_sa.go:55] duration metric: took 2.147793ms for default service account to be created ...
	I1126 20:23:21.662747  292013 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:23:21.665171  292013 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:21.665193  292013 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:21.665209  292013 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:23:21.665224  292013 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:23:21.665236  292013 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:23:21.665250  292013 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:23:21.665260  292013 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:23:21.665271  292013 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:23:21.665282  292013 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:21.665293  292013 system_pods.go:126] duration metric: took 2.539795ms to wait for k8s-apps to be running ...
	I1126 20:23:21.665305  292013 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:23:21.665350  292013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:21.679704  292013 system_svc.go:56] duration metric: took 14.393906ms WaitForService to wait for kubelet
	I1126 20:23:21.679732  292013 kubeadm.go:587] duration metric: took 3.006859665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:21.679763  292013 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:23:21.683714  292013 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:23:21.683746  292013 node_conditions.go:123] node cpu capacity is 8
	I1126 20:23:21.683761  292013 node_conditions.go:105] duration metric: took 3.992542ms to run NodePressure ...
	I1126 20:23:21.683776  292013 start.go:242] waiting for startup goroutines ...
	I1126 20:23:21.683787  292013 start.go:247] waiting for cluster config update ...
	I1126 20:23:21.683803  292013 start.go:256] writing updated cluster config ...
	I1126 20:23:21.684090  292013 ssh_runner.go:195] Run: rm -f paused
	I1126 20:23:21.690737  292013 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:21.694957  292013 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:23:23.700019  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:25.700369  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:24.334563  290654 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.562900759s
	I1126 20:23:25.096556  290654 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.32501021s
	I1126 20:23:26.773126  290654 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001480312s
	I1126 20:23:26.785982  290654 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:23:26.795346  290654 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:23:26.803771  290654 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:23:26.804039  290654 kubeadm.go:319] [mark-control-plane] Marking the node auto-825702 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:23:26.811540  290654 kubeadm.go:319] [bootstrap-token] Using token: cfepsv.ze7li0ueqiisv4u1
	I1126 20:23:26.812735  290654 out.go:252]   - Configuring RBAC rules ...
	I1126 20:23:26.812902  290654 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:23:26.815933  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:23:26.822050  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:23:26.824581  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:23:26.827827  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:23:26.830088  290654 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:23:27.180011  290654 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:23:27.604196  290654 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:23:28.179792  290654 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:23:28.181097  290654 kubeadm.go:319] 
	I1126 20:23:28.181178  290654 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:23:28.181188  290654 kubeadm.go:319] 
	I1126 20:23:28.181271  290654 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:23:28.181284  290654 kubeadm.go:319] 
	I1126 20:23:28.181314  290654 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:23:28.181393  290654 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:23:28.181508  290654 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:23:28.181518  290654 kubeadm.go:319] 
	I1126 20:23:28.181588  290654 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:23:28.181599  290654 kubeadm.go:319] 
	I1126 20:23:28.181662  290654 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:23:28.181671  290654 kubeadm.go:319] 
	I1126 20:23:28.181740  290654 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:23:28.181890  290654 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:23:28.181992  290654 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:23:28.182004  290654 kubeadm.go:319] 
	I1126 20:23:28.182118  290654 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:23:28.182257  290654 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:23:28.182277  290654 kubeadm.go:319] 
	I1126 20:23:28.182389  290654 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cfepsv.ze7li0ueqiisv4u1 \
	I1126 20:23:28.182607  290654 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 20:23:28.182665  290654 kubeadm.go:319] 	--control-plane 
	I1126 20:23:28.182683  290654 kubeadm.go:319] 
	I1126 20:23:28.182781  290654 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:23:28.182794  290654 kubeadm.go:319] 
	I1126 20:23:28.182920  290654 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cfepsv.ze7li0ueqiisv4u1 \
	I1126 20:23:28.183058  290654 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
	I1126 20:23:28.186330  290654 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 20:23:28.186520  290654 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:23:28.186555  290654 cni.go:84] Creating CNI manager for ""
	I1126 20:23:28.186568  290654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:28.189613  290654 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1126 20:23:27.701613  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:30.200387  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:28.190997  290654 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:23:28.196682  290654 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:23:28.196700  290654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:23:28.212764  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:23:28.451574  290654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:23:28.451657  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:28.451736  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-825702 minikube.k8s.io/updated_at=2025_11_26T20_23_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=auto-825702 minikube.k8s.io/primary=true
	I1126 20:23:28.594872  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:28.594872  290654 ops.go:34] apiserver oom_adj: -16
	I1126 20:23:29.095986  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:29.595806  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:30.095675  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:30.595668  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:31.095846  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:31.595663  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:32.095085  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:32.595453  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:33.095688  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:33.186891  290654 kubeadm.go:1114] duration metric: took 4.735299943s to wait for elevateKubeSystemPrivileges
	I1126 20:23:33.187041  290654 kubeadm.go:403] duration metric: took 18.004754645s to StartCluster
	I1126 20:23:33.187069  290654 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:33.187159  290654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:33.189959  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:33.190264  290654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:23:33.190276  290654 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:23:33.190348  290654 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:23:33.190418  290654 addons.go:70] Setting storage-provisioner=true in profile "auto-825702"
	I1126 20:23:33.190430  290654 addons.go:239] Setting addon storage-provisioner=true in "auto-825702"
	I1126 20:23:33.190452  290654 host.go:66] Checking if "auto-825702" exists ...
	I1126 20:23:33.190569  290654 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:33.190656  290654 addons.go:70] Setting default-storageclass=true in profile "auto-825702"
	I1126 20:23:33.190674  290654 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-825702"
	I1126 20:23:33.190997  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:33.191067  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:33.191443  290654 out.go:179] * Verifying Kubernetes components...
	I1126 20:23:33.193928  290654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:33.219111  290654 addons.go:239] Setting addon default-storageclass=true in "auto-825702"
	I1126 20:23:33.219159  290654 host.go:66] Checking if "auto-825702" exists ...
	I1126 20:23:33.219759  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:33.220320  290654 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:23:33.221988  290654 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:33.222009  290654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:23:33.222065  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:33.245865  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:33.249499  290654 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:33.249519  290654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:23:33.249591  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:33.273726  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:33.288579  290654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:23:33.352015  290654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:33.372793  290654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:33.394803  290654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:33.477827  290654 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1126 20:23:33.480376  290654 node_ready.go:35] waiting up to 15m0s for node "auto-825702" to be "Ready" ...
	I1126 20:23:33.693090  290654 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.269509096Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.269541179Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.269564678Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.275264124Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.275295501Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.275321955Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.279381026Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.279404988Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.279424435Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.283307271Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.283331882Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.283352449Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.286767996Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:23:22 no-preload-026579 crio[566]: time="2025-11-26T20:23:22.286787703Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.518813434Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fa01afeb-c15b-4c37-9eb6-30991b90ab9e name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.519979116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1d6a264f-f491-4597-a954-b3d4b0525848 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.521634068Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds/dashboard-metrics-scraper" id=eaf205ab-f787-46ad-aba9-dd4128b5aab3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.522416351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.536752656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.537608948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.583417415Z" level=info msg="Created container d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds/dashboard-metrics-scraper" id=eaf205ab-f787-46ad-aba9-dd4128b5aab3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.585143775Z" level=info msg="Starting container: d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401" id=70c7049b-3d22-4496-be03-2e0c6189f699 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.589656663Z" level=info msg="Started container" PID=1782 containerID=d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds/dashboard-metrics-scraper id=70c7049b-3d22-4496-be03-2e0c6189f699 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa4cb50f83fdf272504a6f63af12a824b04062de7c1882d1db85de91c42a5637
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.687337437Z" level=info msg="Removing container: a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0" id=5d55905d-a668-432d-9649-4fbb8b161cc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:23:28 no-preload-026579 crio[566]: time="2025-11-26T20:23:28.700225318Z" level=info msg="Removed container a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds/dashboard-metrics-scraper" id=5d55905d-a668-432d-9649-4fbb8b161cc6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d3aa9d6833a17       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   fa4cb50f83fdf       dashboard-metrics-scraper-6ffb444bf9-9crds   kubernetes-dashboard
	7f8751fb0a65f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   54c7878020476       storage-provisioner                          kube-system
	2a10859cf5d56       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   bf7f86d8a6e14       kubernetes-dashboard-855c9754f9-vghzh        kubernetes-dashboard
	978ebe7980e56       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   f7d098c06913e       busybox                                      default
	5e01aebce0762       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   a5c8ef94f6ae4       coredns-66bc5c9577-wl4xp                     kube-system
	3bda7bcb07b58       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   541a87cf67ff6       kube-proxy-ktbwp                             kube-system
	32e35f17feb89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   54c7878020476       storage-provisioner                          kube-system
	42828f83720fe       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   958a716237e0f       kindnet-8rfpj                                kube-system
	5a4a0e2af1862       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   a96f723a1547a       kube-apiserver-no-preload-026579             kube-system
	bbe6a4946c008       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   25ad52b01dfd7       kube-scheduler-no-preload-026579             kube-system
	d4451cac813aa       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   014515d76839a       kube-controller-manager-no-preload-026579    kube-system
	590f69567c94c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   25d55a4236642       etcd-no-preload-026579                       kube-system
	
	
	==> coredns [5e01aebce076296d3ed66d96beac70b5dd1097a82f3df65932b28a11c416eb6f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46632 - 43997 "HINFO IN 4146496324711485698.5438811130630472017. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073809414s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-026579
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-026579
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=no-preload-026579
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_21_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:21:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-026579
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:23:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:23:21 +0000   Wed, 26 Nov 2025 20:21:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:23:21 +0000   Wed, 26 Nov 2025 20:21:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:23:21 +0000   Wed, 26 Nov 2025 20:21:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:23:21 +0000   Wed, 26 Nov 2025 20:22:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-026579
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                896379d3-12e9-47c2-b887-9f21dde83abe
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-wl4xp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-026579                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-8rfpj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-026579              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-026579     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-ktbwp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-026579              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9crds    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vghzh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node no-preload-026579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node no-preload-026579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node no-preload-026579 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node no-preload-026579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node no-preload-026579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node no-preload-026579 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node no-preload-026579 event: Registered Node no-preload-026579 in Controller
	  Normal  NodeReady                95s                  kubelet          Node no-preload-026579 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node no-preload-026579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node no-preload-026579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node no-preload-026579 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                  node-controller  Node no-preload-026579 event: Registered Node no-preload-026579 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [590f69567c94cb9063adb6b5b5bfedd56c93afe7320fc6431800372b19411ff4] <==
	{"level":"warn","ts":"2025-11-26T20:22:39.904659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.910332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.916491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.923126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.929604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.936349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.943067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.950226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.964493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.972416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.978512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.985520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.992770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:39.999830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.007796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.015679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.024554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.033153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.042315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.057188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.065053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.074364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:40.136450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:07.261409Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.143241ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356913385038398 > lease_revoke:<id:59069ac1d516e1ac>","response":"size:28"}
	{"level":"info","ts":"2025-11-26T20:23:07.609663Z","caller":"traceutil/trace.go:172","msg":"trace[1242984989] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"146.689979ms","start":"2025-11-26T20:23:07.462950Z","end":"2025-11-26T20:23:07.609640Z","steps":["trace[1242984989] 'process raft request'  (duration: 146.471159ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:23:36 up  1:06,  0 user,  load average: 3.22, 3.14, 2.12
	Linux no-preload-026579 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [42828f83720fe35dd743e7a5c065fe0193e00746358e315543e3a492e04cddc2] <==
	I1126 20:22:42.055393       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:22:42.055613       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:22:42.055753       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:22:42.055770       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:22:42.055790       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:22:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:22:42.264824       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:22:42.264843       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:22:42.264853       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:22:42.265012       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:23:12.264878       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:23:12.264955       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:23:12.265068       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:23:12.265076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1126 20:23:13.764985       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:23:13.765026       1 metrics.go:72] Registering metrics
	I1126 20:23:13.765094       1 controller.go:711] "Syncing nftables rules"
	I1126 20:23:22.264601       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:23:22.264656       1 main.go:301] handling current node
	I1126 20:23:32.266367       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:23:32.266409       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a4a0e2af1862b25492b690a7a862bef76306a1889835ec8f86fa09057dad6a9] <==
	I1126 20:22:40.647421       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:22:40.650837       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:22:40.654303       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:22:40.654333       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:22:40.654342       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:22:40.654355       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:22:40.654363       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:22:40.654366       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:22:40.654583       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:22:40.654596       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:22:40.654517       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:22:40.654408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:22:40.654410       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:22:40.662245       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:22:40.912325       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:22:40.941907       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:22:40.975705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:22:40.991133       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:22:40.997406       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:22:41.027780       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.115.129"}
	I1126 20:22:41.044410       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.81.142"}
	I1126 20:22:41.534527       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:22:43.981790       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:22:44.480972       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:22:44.531670       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d4451cac813aa54ddc31652afec1d7e8194495b96f8e083719c0cc55f90393e6] <==
	I1126 20:22:43.984393       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:22:43.985267       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1126 20:22:43.987498       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:22:43.988664       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1126 20:22:43.989839       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:22:43.992159       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:22:43.992225       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:22:43.992164       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:43.992286       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-026579"
	I1126 20:22:43.992350       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 20:22:43.996231       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:22:43.998513       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:22:44.000712       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:22:44.003990       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:22:44.026930       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:22:44.027037       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:22:44.028250       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:22:44.028374       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:22:44.028473       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:22:44.028658       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:22:44.033895       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:44.033915       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:22:44.033926       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:22:44.034993       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:44.037718       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [3bda7bcb07b58f6c63b7516471b9f44f6a342dfe7e086d6c5c44bb0df06149f9] <==
	I1126 20:22:41.921812       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:22:41.975399       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:22:42.075553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:22:42.075611       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:22:42.075735       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:22:42.092809       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:22:42.092857       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:22:42.097613       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:22:42.097977       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:22:42.097992       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:42.099200       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:22:42.099227       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:22:42.099253       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:22:42.099268       1 config.go:200] "Starting service config controller"
	I1126 20:22:42.099272       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:22:42.099274       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:22:42.099515       1 config.go:309] "Starting node config controller"
	I1126 20:22:42.099533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:22:42.099541       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:22:42.200284       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:22:42.200301       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:22:42.200303       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bbe6a4946c0088614f8a872edc385ef4bf724c29b1ba7f173936d426a0fc26e3] <==
	I1126 20:22:39.546629       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:22:40.606299       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:22:40.606396       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:40.612982       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1126 20:22:40.613078       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1126 20:22:40.613128       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:40.613139       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:40.613179       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:22:40.613188       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:22:40.613426       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:22:40.613603       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:22:40.713888       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1126 20:22:40.713889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:22:40.713934       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:22:47 no-preload-026579 kubelet[716]: I1126 20:22:47.456375     716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:22:51 no-preload-026579 kubelet[716]: I1126 20:22:51.036691     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vghzh" podStartSLOduration=2.703342485 podStartE2EDuration="7.036667655s" podCreationTimestamp="2025-11-26 20:22:44 +0000 UTC" firstStartedPulling="2025-11-26 20:22:44.950632649 +0000 UTC m=+6.539870215" lastFinishedPulling="2025-11-26 20:22:49.283957803 +0000 UTC m=+10.873195385" observedRunningTime="2025-11-26 20:22:49.591448745 +0000 UTC m=+11.180686326" watchObservedRunningTime="2025-11-26 20:22:51.036667655 +0000 UTC m=+12.625905239"
	Nov 26 20:22:52 no-preload-026579 kubelet[716]: I1126 20:22:52.582167     716 scope.go:117] "RemoveContainer" containerID="df0e17da5612b2815f13652663efc8c2ee00be08301bd7430754174460494590"
	Nov 26 20:22:53 no-preload-026579 kubelet[716]: I1126 20:22:53.586545     716 scope.go:117] "RemoveContainer" containerID="df0e17da5612b2815f13652663efc8c2ee00be08301bd7430754174460494590"
	Nov 26 20:22:53 no-preload-026579 kubelet[716]: I1126 20:22:53.586939     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:22:53 no-preload-026579 kubelet[716]: E1126 20:22:53.587112     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:22:54 no-preload-026579 kubelet[716]: I1126 20:22:54.590444     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:22:54 no-preload-026579 kubelet[716]: E1126 20:22:54.590710     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:22:56 no-preload-026579 kubelet[716]: I1126 20:22:56.401442     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:22:56 no-preload-026579 kubelet[716]: E1126 20:22:56.401719     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:23:07 no-preload-026579 kubelet[716]: I1126 20:23:07.516826     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:23:08 no-preload-026579 kubelet[716]: I1126 20:23:08.629384     716 scope.go:117] "RemoveContainer" containerID="82a1c326306eba14c3b3b80512d88cc8fa49036be32a702d1abc5af6da2a7d96"
	Nov 26 20:23:08 no-preload-026579 kubelet[716]: I1126 20:23:08.629947     716 scope.go:117] "RemoveContainer" containerID="a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0"
	Nov 26 20:23:08 no-preload-026579 kubelet[716]: E1126 20:23:08.630176     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:23:12 no-preload-026579 kubelet[716]: I1126 20:23:12.640862     716 scope.go:117] "RemoveContainer" containerID="32e35f17feb89664f1fae773b335343aa39c1e0842c3b1752b38b7722a5b602f"
	Nov 26 20:23:16 no-preload-026579 kubelet[716]: I1126 20:23:16.401828     716 scope.go:117] "RemoveContainer" containerID="a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0"
	Nov 26 20:23:16 no-preload-026579 kubelet[716]: E1126 20:23:16.402072     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:23:28 no-preload-026579 kubelet[716]: I1126 20:23:28.517964     716 scope.go:117] "RemoveContainer" containerID="a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0"
	Nov 26 20:23:28 no-preload-026579 kubelet[716]: I1126 20:23:28.685263     716 scope.go:117] "RemoveContainer" containerID="a949b6bc71056991eb7a8f848ac1159bfbd5a5f00cb1af85e7a397392f3606a0"
	Nov 26 20:23:28 no-preload-026579 kubelet[716]: I1126 20:23:28.685500     716 scope.go:117] "RemoveContainer" containerID="d3aa9d6833a17ddfa7604b0ff6d901e9ecd66bef0a9f6abdb2dbe075384fe401"
	Nov 26 20:23:28 no-preload-026579 kubelet[716]: E1126 20:23:28.685729     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9crds_kubernetes-dashboard(d652bea3-c93c-4333-ba75-f7f035746dd2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9crds" podUID="d652bea3-c93c-4333-ba75-f7f035746dd2"
	Nov 26 20:23:31 no-preload-026579 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:23:31 no-preload-026579 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:23:31 no-preload-026579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:23:31 no-preload-026579 systemd[1]: kubelet.service: Consumed 1.595s CPU time.
	
	
	==> kubernetes-dashboard [2a10859cf5d562e2b2c673c950cfcf54f0d04d81fa0b79c317ba79e924252860] <==
	2025/11/26 20:22:49 Starting overwatch
	2025/11/26 20:22:49 Using namespace: kubernetes-dashboard
	2025/11/26 20:22:49 Using in-cluster config to connect to apiserver
	2025/11/26 20:22:49 Using secret token for csrf signing
	2025/11/26 20:22:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:22:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:22:49 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:22:49 Generating JWE encryption key
	2025/11/26 20:22:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:22:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:22:49 Initializing JWE encryption key from synchronized object
	2025/11/26 20:22:49 Creating in-cluster Sidecar client
	2025/11/26 20:22:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:22:49 Serving insecurely on HTTP port: 9090
	2025/11/26 20:23:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [32e35f17feb89664f1fae773b335343aa39c1e0842c3b1752b38b7722a5b602f] <==
	I1126 20:22:41.887783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:23:11.889870       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7f8751fb0a65f1680045ae79ecb0521ff69da6a9ef59fff50fa749d20fb9cf69] <==
	I1126 20:23:12.687863       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:23:12.696308       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:23:12.696349       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:23:12.698033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:16.152729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:20.413536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:24.011713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:27.065519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:30.087940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:30.098096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:23:30.098269       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:23:30.098491       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-026579_e07b4716-0145-4ebe-8624-1e2a80bb0f4f!
	I1126 20:23:30.098403       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"001ce068-24a6-4540-989a-014660d8c6e6", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-026579_e07b4716-0145-4ebe-8624-1e2a80bb0f4f became leader
	W1126 20:23:30.105287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:30.109830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:23:30.199511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-026579_e07b4716-0145-4ebe-8624-1e2a80bb0f4f!
	W1126 20:23:32.113019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:32.116894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:34.120387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:34.125505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:36.129239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:36.133510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-026579 -n no-preload-026579
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-026579 -n no-preload-026579: exit status 2 (324.446712ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-026579 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-949294 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-949294 --alsologtostderr -v=1: exit status 80 (2.274642769s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-949294 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 20:23:36.433366  297762 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:23:36.433786  297762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:36.433801  297762 out.go:374] Setting ErrFile to fd 2...
	I1126 20:23:36.433808  297762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:36.434259  297762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:23:36.434612  297762 out.go:368] Setting JSON to false
	I1126 20:23:36.434719  297762 mustload.go:66] Loading cluster: embed-certs-949294
	I1126 20:23:36.435371  297762 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:36.435889  297762 cli_runner.go:164] Run: docker container inspect embed-certs-949294 --format={{.State.Status}}
	I1126 20:23:36.455433  297762 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:23:36.455700  297762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:36.515272  297762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-26 20:23:36.505573719 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:36.515855  297762 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-949294 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:23:36.517803  297762 out.go:179] * Pausing node embed-certs-949294 ... 
	I1126 20:23:36.518723  297762 host.go:66] Checking if "embed-certs-949294" exists ...
	I1126 20:23:36.519028  297762 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:36.519080  297762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-949294
	I1126 20:23:36.538228  297762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/embed-certs-949294/id_rsa Username:docker}
	I1126 20:23:36.635556  297762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:36.649012  297762 pause.go:52] kubelet running: true
	I1126 20:23:36.649072  297762 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:23:36.828032  297762 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:23:36.828224  297762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:23:36.906596  297762 cri.go:89] found id: "ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f"
	I1126 20:23:36.906615  297762 cri.go:89] found id: "cfe8d29a25b15896fe250c2367ea5bf40c17ce3f9aa972b6a3000afe2cea2ba4"
	I1126 20:23:36.906621  297762 cri.go:89] found id: "c8864fe9788731d2ce2001c0936defdb11739fcb35cd62e407e708947119748c"
	I1126 20:23:36.906626  297762 cri.go:89] found id: "be832c3cf0e3ebddcc1aa2777d57e2d38d2836d83f9be1d7f3aeba656fd95381"
	I1126 20:23:36.906631  297762 cri.go:89] found id: "365c78f7ca47179e6d1e36f011f96db8f3f25d8d01ed886707e3e02c4beb2040"
	I1126 20:23:36.906636  297762 cri.go:89] found id: "d0edb51e9e5fccff0dc7134b628a507303eb3f0ea693960b2ef07c819ccfcfb3"
	I1126 20:23:36.906641  297762 cri.go:89] found id: "1247796ab1281fb5ee5525e146ff9c03a80b5e36662073b88f7b4335b21630fa"
	I1126 20:23:36.906645  297762 cri.go:89] found id: "f265ea81a09610c82177779173f228ed110d405daaad945e9224abda5afc655e"
	I1126 20:23:36.906658  297762 cri.go:89] found id: "27f1868f6da521ee9bf27fc5eccba3b561b07052f526f759dfbebcc382b682e0"
	I1126 20:23:36.906677  297762 cri.go:89] found id: "b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2"
	I1126 20:23:36.906685  297762 cri.go:89] found id: "3a8d799bf870a2ddb08ba11d6e10375455a0f1bff23f82425087bf768c717932"
	I1126 20:23:36.906689  297762 cri.go:89] found id: ""
	I1126 20:23:36.906733  297762 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:23:36.919602  297762 retry.go:31] will retry after 262.502949ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:36Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:23:37.183116  297762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:37.196400  297762 pause.go:52] kubelet running: false
	I1126 20:23:37.196447  297762 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:23:37.348069  297762 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:23:37.348140  297762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:23:37.414040  297762 cri.go:89] found id: "ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f"
	I1126 20:23:37.414060  297762 cri.go:89] found id: "cfe8d29a25b15896fe250c2367ea5bf40c17ce3f9aa972b6a3000afe2cea2ba4"
	I1126 20:23:37.414064  297762 cri.go:89] found id: "c8864fe9788731d2ce2001c0936defdb11739fcb35cd62e407e708947119748c"
	I1126 20:23:37.414067  297762 cri.go:89] found id: "be832c3cf0e3ebddcc1aa2777d57e2d38d2836d83f9be1d7f3aeba656fd95381"
	I1126 20:23:37.414070  297762 cri.go:89] found id: "365c78f7ca47179e6d1e36f011f96db8f3f25d8d01ed886707e3e02c4beb2040"
	I1126 20:23:37.414074  297762 cri.go:89] found id: "d0edb51e9e5fccff0dc7134b628a507303eb3f0ea693960b2ef07c819ccfcfb3"
	I1126 20:23:37.414076  297762 cri.go:89] found id: "1247796ab1281fb5ee5525e146ff9c03a80b5e36662073b88f7b4335b21630fa"
	I1126 20:23:37.414079  297762 cri.go:89] found id: "f265ea81a09610c82177779173f228ed110d405daaad945e9224abda5afc655e"
	I1126 20:23:37.414081  297762 cri.go:89] found id: "27f1868f6da521ee9bf27fc5eccba3b561b07052f526f759dfbebcc382b682e0"
	I1126 20:23:37.414088  297762 cri.go:89] found id: "b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2"
	I1126 20:23:37.414091  297762 cri.go:89] found id: "3a8d799bf870a2ddb08ba11d6e10375455a0f1bff23f82425087bf768c717932"
	I1126 20:23:37.414094  297762 cri.go:89] found id: ""
	I1126 20:23:37.414128  297762 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:23:37.426063  297762 retry.go:31] will retry after 285.135683ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:37Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:23:37.711530  297762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:37.726131  297762 pause.go:52] kubelet running: false
	I1126 20:23:37.726202  297762 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:23:37.875516  297762 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:23:37.875609  297762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:23:37.940821  297762 cri.go:89] found id: "ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f"
	I1126 20:23:37.940839  297762 cri.go:89] found id: "cfe8d29a25b15896fe250c2367ea5bf40c17ce3f9aa972b6a3000afe2cea2ba4"
	I1126 20:23:37.940843  297762 cri.go:89] found id: "c8864fe9788731d2ce2001c0936defdb11739fcb35cd62e407e708947119748c"
	I1126 20:23:37.940847  297762 cri.go:89] found id: "be832c3cf0e3ebddcc1aa2777d57e2d38d2836d83f9be1d7f3aeba656fd95381"
	I1126 20:23:37.940850  297762 cri.go:89] found id: "365c78f7ca47179e6d1e36f011f96db8f3f25d8d01ed886707e3e02c4beb2040"
	I1126 20:23:37.940854  297762 cri.go:89] found id: "d0edb51e9e5fccff0dc7134b628a507303eb3f0ea693960b2ef07c819ccfcfb3"
	I1126 20:23:37.940856  297762 cri.go:89] found id: "1247796ab1281fb5ee5525e146ff9c03a80b5e36662073b88f7b4335b21630fa"
	I1126 20:23:37.940859  297762 cri.go:89] found id: "f265ea81a09610c82177779173f228ed110d405daaad945e9224abda5afc655e"
	I1126 20:23:37.940862  297762 cri.go:89] found id: "27f1868f6da521ee9bf27fc5eccba3b561b07052f526f759dfbebcc382b682e0"
	I1126 20:23:37.940876  297762 cri.go:89] found id: "b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2"
	I1126 20:23:37.940882  297762 cri.go:89] found id: "3a8d799bf870a2ddb08ba11d6e10375455a0f1bff23f82425087bf768c717932"
	I1126 20:23:37.940885  297762 cri.go:89] found id: ""
	I1126 20:23:37.940927  297762 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:23:37.952137  297762 retry.go:31] will retry after 449.117841ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:37Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:23:38.401742  297762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:38.425920  297762 pause.go:52] kubelet running: false
	I1126 20:23:38.425982  297762 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:23:38.567481  297762 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:23:38.567566  297762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:23:38.628544  297762 cri.go:89] found id: "ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f"
	I1126 20:23:38.628567  297762 cri.go:89] found id: "cfe8d29a25b15896fe250c2367ea5bf40c17ce3f9aa972b6a3000afe2cea2ba4"
	I1126 20:23:38.628573  297762 cri.go:89] found id: "c8864fe9788731d2ce2001c0936defdb11739fcb35cd62e407e708947119748c"
	I1126 20:23:38.628578  297762 cri.go:89] found id: "be832c3cf0e3ebddcc1aa2777d57e2d38d2836d83f9be1d7f3aeba656fd95381"
	I1126 20:23:38.628582  297762 cri.go:89] found id: "365c78f7ca47179e6d1e36f011f96db8f3f25d8d01ed886707e3e02c4beb2040"
	I1126 20:23:38.628587  297762 cri.go:89] found id: "d0edb51e9e5fccff0dc7134b628a507303eb3f0ea693960b2ef07c819ccfcfb3"
	I1126 20:23:38.628591  297762 cri.go:89] found id: "1247796ab1281fb5ee5525e146ff9c03a80b5e36662073b88f7b4335b21630fa"
	I1126 20:23:38.628595  297762 cri.go:89] found id: "f265ea81a09610c82177779173f228ed110d405daaad945e9224abda5afc655e"
	I1126 20:23:38.628600  297762 cri.go:89] found id: "27f1868f6da521ee9bf27fc5eccba3b561b07052f526f759dfbebcc382b682e0"
	I1126 20:23:38.628608  297762 cri.go:89] found id: "b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2"
	I1126 20:23:38.628613  297762 cri.go:89] found id: "3a8d799bf870a2ddb08ba11d6e10375455a0f1bff23f82425087bf768c717932"
	I1126 20:23:38.628617  297762 cri.go:89] found id: ""
	I1126 20:23:38.628670  297762 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:23:38.642552  297762 out.go:203] 
	W1126 20:23:38.643559  297762 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:23:38.643573  297762 out.go:285] * 
	W1126 20:23:38.647574  297762 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:23:38.648551  297762 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-949294 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-949294
helpers_test.go:243: (dbg) docker inspect embed-certs-949294:

-- stdout --
	[
	    {
	        "Id": "86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430",
	        "Created": "2025-11-26T20:21:31.21255744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281602,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:22:37.846330329Z",
	            "FinishedAt": "2025-11-26T20:22:36.906774097Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/hostname",
	        "HostsPath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/hosts",
	        "LogPath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430-json.log",
	        "Name": "/embed-certs-949294",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-949294:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-949294",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430",
	                "LowerDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-949294",
	                "Source": "/var/lib/docker/volumes/embed-certs-949294/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-949294",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-949294",
	                "name.minikube.sigs.k8s.io": "embed-certs-949294",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b3911b3063b901182b9c3ae78e80109ca672f538e5054538cc2eb8d96b6cf713",
	            "SandboxKey": "/var/run/docker/netns/b3911b3063b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-949294": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7fd9c7914891185e47dacdba5bd1d1c0b9a651e39050d7a01ee422b067e5fad7",
	                    "EndpointID": "6b8b3b6e4b6f17f47bf591f2661234c50c44840efc981abc1c2c939f32fcca2c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "66:60:2e:2e:ce:71",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-949294",
	                        "86fea694f6d2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-949294 -n embed-certs-949294
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-949294 -n embed-certs-949294: exit status 2 (322.839644ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-949294 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-949294 logs -n 25: (1.143833253s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-026579 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-949294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p embed-certs-949294 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p no-preload-026579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-949294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ stop    │ -p newest-cni-297942 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-297942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ image   │ newest-cni-297942 image list --format=json                                                                                                                                                                                                    │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ pause   │ -p newest-cni-297942 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-178152 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ delete  │ -p newest-cni-297942                                                                                                                                                                                                                          │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ delete  │ -p newest-cni-297942                                                                                                                                                                                                                          │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p auto-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-178152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ image   │ no-preload-026579 image list --format=json                                                                                                                                                                                                    │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ pause   │ -p no-preload-026579 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ image   │ embed-certs-949294 image list --format=json                                                                                                                                                                                                   │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ pause   │ -p embed-certs-949294 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ delete  │ -p no-preload-026579                                                                                                                                                                                                                          │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:23:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:23:11.440383  292013 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:23:11.440496  292013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:11.440508  292013 out.go:374] Setting ErrFile to fd 2...
	I1126 20:23:11.440515  292013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:11.440723  292013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:23:11.441107  292013 out.go:368] Setting JSON to false
	I1126 20:23:11.442313  292013 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3941,"bootTime":1764184650,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:23:11.442360  292013 start.go:143] virtualization: kvm guest
	I1126 20:23:11.444216  292013 out.go:179] * [default-k8s-diff-port-178152] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:23:11.445318  292013 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:23:11.445306  292013 notify.go:221] Checking for updates...
	I1126 20:23:11.446480  292013 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:23:11.447697  292013 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:11.448830  292013 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:23:11.449874  292013 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:23:11.450880  292013 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:23:11.452236  292013 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:11.452747  292013 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:23:11.477152  292013 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:23:11.477223  292013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:11.530562  292013 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-26 20:23:11.521081776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:11.530668  292013 docker.go:319] overlay module found
	I1126 20:23:11.532344  292013 out.go:179] * Using the docker driver based on existing profile
	I1126 20:23:11.533553  292013 start.go:309] selected driver: docker
	I1126 20:23:11.533572  292013 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:11.533665  292013 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:23:11.534315  292013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:11.590661  292013 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-26 20:23:11.581789918 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:11.590918  292013 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:11.590946  292013 cni.go:84] Creating CNI manager for ""
	I1126 20:23:11.590995  292013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:11.591030  292013 start.go:353] cluster config:
	{Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:11.592614  292013 out.go:179] * Starting "default-k8s-diff-port-178152" primary control-plane node in "default-k8s-diff-port-178152" cluster
	I1126 20:23:11.593974  292013 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:23:11.595037  292013 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:23:11.596046  292013 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:11.596075  292013 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:23:11.596085  292013 cache.go:65] Caching tarball of preloaded images
	I1126 20:23:11.596139  292013 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:23:11.596167  292013 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:23:11.596174  292013 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:23:11.596261  292013 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/config.json ...
	I1126 20:23:11.615795  292013 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:23:11.615813  292013 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:23:11.615829  292013 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:23:11.615858  292013 start.go:360] acquireMachinesLock for default-k8s-diff-port-178152: {Name:mk205db4bd139b8853f3d786653274635beb61e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:23:11.615920  292013 start.go:364] duration metric: took 34.361µs to acquireMachinesLock for "default-k8s-diff-port-178152"
	I1126 20:23:11.615936  292013 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:23:11.615941  292013 fix.go:54] fixHost starting: 
	I1126 20:23:11.616144  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:11.633041  292013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-178152: state=Stopped err=<nil>
	W1126 20:23:11.633069  292013 fix.go:138] unexpected machine state, will restart: <nil>
	W1126 20:23:08.695965  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:23:11.193321  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:23:09.709550  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:23:11.709818  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:08.134694  290654 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-825702 --name auto-825702 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-825702 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-825702 --network auto-825702 --ip 192.168.103.2 --volume auto-825702:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:23:08.459052  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Running}}
	I1126 20:23:08.476518  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:08.493237  290654 cli_runner.go:164] Run: docker exec auto-825702 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:23:08.540339  290654 oci.go:144] the created container "auto-825702" has a running status.
	I1126 20:23:08.540374  290654 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa...
	I1126 20:23:08.625248  290654 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:23:08.653620  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:08.671280  290654 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:23:08.671296  290654 kic_runner.go:114] Args: [docker exec --privileged auto-825702 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:23:08.732039  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:08.755179  290654 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:08.755285  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:08.780893  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:08.781238  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:08.781257  290654 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:08.782168  290654 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47660->127.0.0.1:33098: read: connection reset by peer
	I1126 20:23:11.933816  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-825702
	
	I1126 20:23:11.933845  290654 ubuntu.go:182] provisioning hostname "auto-825702"
	I1126 20:23:11.933942  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:11.955152  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:11.955427  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:11.955445  290654 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-825702 && echo "auto-825702" | sudo tee /etc/hostname
	I1126 20:23:12.106616  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-825702
	
	I1126 20:23:12.106688  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.126835  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:12.127147  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:12.127173  290654 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-825702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-825702/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-825702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:12.277739  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:12.277766  290654 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:12.277789  290654 ubuntu.go:190] setting up certificates
	I1126 20:23:12.277804  290654 provision.go:84] configureAuth start
	I1126 20:23:12.277864  290654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-825702
	I1126 20:23:12.295168  290654 provision.go:143] copyHostCerts
	I1126 20:23:12.295223  290654 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:12.295236  290654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:12.295296  290654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:12.295381  290654 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:12.295390  290654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:12.295415  290654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:12.295497  290654 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:12.295506  290654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:12.295534  290654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:12.295591  290654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.auto-825702 san=[127.0.0.1 192.168.103.2 auto-825702 localhost minikube]
	I1126 20:23:12.321795  290654 provision.go:177] copyRemoteCerts
	I1126 20:23:12.321839  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:12.321870  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.339200  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:12.437185  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1126 20:23:12.456201  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:23:12.472910  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:12.489243  290654 provision.go:87] duration metric: took 211.42653ms to configureAuth
	I1126 20:23:12.489265  290654 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:12.489416  290654 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:12.489511  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.507582  290654 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:12.507780  290654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1126 20:23:12.507796  290654 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:12.781449  290654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:12.781491  290654 machine.go:97] duration metric: took 4.026285211s to provisionDockerMachine
	I1126 20:23:12.781503  290654 client.go:176] duration metric: took 9.486657251s to LocalClient.Create
	I1126 20:23:12.781520  290654 start.go:167] duration metric: took 9.48674154s to libmachine.API.Create "auto-825702"
	I1126 20:23:12.781527  290654 start.go:293] postStartSetup for "auto-825702" (driver="docker")
	I1126 20:23:12.781535  290654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:12.781581  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:12.781622  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.801338  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:12.900997  290654 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:12.904439  290654 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:12.904478  290654 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:12.904490  290654 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:12.904539  290654 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:12.904630  290654 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:12.904740  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:12.912016  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:12.931277  290654 start.go:296] duration metric: took 149.73924ms for postStartSetup
	I1126 20:23:12.931620  290654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-825702
	I1126 20:23:12.948897  290654 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/config.json ...
	I1126 20:23:12.949153  290654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:12.949198  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:12.966056  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:13.061265  290654 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:13.065549  290654 start.go:128] duration metric: took 9.77265288s to createHost
	I1126 20:23:13.065569  290654 start.go:83] releasing machines lock for "auto-825702", held for 9.772807938s
	I1126 20:23:13.065624  290654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-825702
	I1126 20:23:13.082987  290654 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:13.083045  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:13.083065  290654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:13.083125  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:13.101098  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:13.101658  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:13.248108  290654 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:13.254244  290654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:13.288072  290654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:13.292438  290654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:13.292520  290654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:13.317258  290654 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:23:13.317277  290654 start.go:496] detecting cgroup driver to use...
	I1126 20:23:13.317301  290654 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:13.317343  290654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:13.332701  290654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:13.343996  290654 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:13.344063  290654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:13.359920  290654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:13.376200  290654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:13.458202  290654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:13.545058  290654 docker.go:234] disabling docker service ...
	I1126 20:23:13.545125  290654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:13.563618  290654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:13.575589  290654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:13.659232  290654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:13.741598  290654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:13.753230  290654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:13.766347  290654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:13.766400  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.776320  290654 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:13.776363  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.785041  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.792995  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.801178  290654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:13.808838  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.817198  290654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.829677  290654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:13.837756  290654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:13.844718  290654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:13.851623  290654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:13.929048  290654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:23:14.058401  290654 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:14.058487  290654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:14.062290  290654 start.go:564] Will wait 60s for crictl version
	I1126 20:23:14.062353  290654 ssh_runner.go:195] Run: which crictl
	I1126 20:23:14.065660  290654 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:14.091120  290654 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:14.091210  290654 ssh_runner.go:195] Run: crio --version
	I1126 20:23:14.117155  290654 ssh_runner.go:195] Run: crio --version
	I1126 20:23:14.145211  290654 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:23:14.146347  290654 cli_runner.go:164] Run: docker network inspect auto-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:14.163312  290654 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:14.167143  290654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:14.176842  290654 kubeadm.go:884] updating cluster {Name:auto-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:23:14.176954  290654 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:14.177008  290654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:14.209406  290654 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:14.209426  290654 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:14.209480  290654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:14.233034  290654 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:14.233054  290654 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:14.233064  290654 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1126 20:23:14.233167  290654 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-825702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:23:14.233234  290654 ssh_runner.go:195] Run: crio config
	I1126 20:23:14.277192  290654 cni.go:84] Creating CNI manager for ""
	I1126 20:23:14.277214  290654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:14.277232  290654 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:14.277262  290654 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-825702 NodeName:auto-825702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:14.277404  290654 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-825702"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:23:14.277482  290654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:14.285340  290654 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:14.285386  290654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:14.292836  290654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1126 20:23:14.304841  290654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:14.319148  290654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1126 20:23:14.330598  290654 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:14.333692  290654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:14.342648  290654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:14.418860  290654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:14.441407  290654 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702 for IP: 192.168.103.2
	I1126 20:23:14.441425  290654 certs.go:195] generating shared ca certs ...
	I1126 20:23:14.441445  290654 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.441599  290654 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:14.441660  290654 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:14.441675  290654 certs.go:257] generating profile certs ...
	I1126 20:23:14.441739  290654 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.key
	I1126 20:23:14.441756  290654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.crt with IP's: []
	I1126 20:23:14.561248  290654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.crt ...
	I1126 20:23:14.561273  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.crt: {Name:mka78bb7cd65f448b3a66a8ed3242d744cbd3ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.561443  290654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.key ...
	I1126 20:23:14.561471  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/client.key: {Name:mk7e6b179f66f415078976ea7604686ca387360b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.561580  290654 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac
	I1126 20:23:14.561598  290654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1126 20:23:14.653268  290654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac ...
	I1126 20:23:14.653291  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac: {Name:mk19073e3da57c61475b1d8ab67fc8245bda1990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.653426  290654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac ...
	I1126 20:23:14.653442  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac: {Name:mk442d2e6204a99840a9704e9c26d0fbee8bfeb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.653547  290654 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt.a3b7f4ac -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt
	I1126 20:23:14.653646  290654 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key.a3b7f4ac -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key
	I1126 20:23:14.653728  290654 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key
	I1126 20:23:14.653748  290654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt with IP's: []
	I1126 20:23:14.813410  290654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt ...
	I1126 20:23:14.813435  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt: {Name:mkde4786d8d21ddb4efdf9613c2ade685abc5c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.813610  290654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key ...
	I1126 20:23:14.813627  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key: {Name:mkd76ddd51996d4102db39f9558a24d218af9bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:14.813815  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:14.813862  290654 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:14.813874  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:14.813912  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:14.813952  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:14.814033  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:14.814101  290654 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:14.814651  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:14.832871  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:14.849984  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:14.866732  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:14.882934  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1126 20:23:14.899525  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:14.915794  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:14.931980  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/auto-825702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:23:14.947744  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:14.965250  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:14.981169  290654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:14.997436  290654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:15.009230  290654 ssh_runner.go:195] Run: openssl version
	I1126 20:23:15.015235  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:15.022951  290654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:15.026235  290654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:15.026277  290654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:15.060428  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:23:15.068547  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:15.076572  290654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:15.080134  290654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:15.080168  290654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:15.113444  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:23:15.121470  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:15.129887  290654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:15.133669  290654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:15.133717  290654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:15.170371  290654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:23:15.178639  290654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:15.182230  290654 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:23:15.182288  290654 kubeadm.go:401] StartCluster: {Name:auto-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:15.182372  290654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:15.182417  290654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:15.213578  290654 cri.go:89] found id: ""
	I1126 20:23:15.213641  290654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:15.221802  290654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:23:15.229340  290654 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:23:15.229390  290654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:23:15.236999  290654 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:23:15.237013  290654 kubeadm.go:158] found existing configuration files:
	
	I1126 20:23:15.237046  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:23:15.243974  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:23:15.244013  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:23:15.250710  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:23:15.257608  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:23:15.257644  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:23:15.264135  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:23:15.270882  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:23:15.270929  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:23:15.277491  290654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:23:15.284739  290654 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:23:15.284784  290654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:23:15.292713  290654 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:23:15.331393  290654 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:23:15.331452  290654 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:23:15.349760  290654 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:23:15.349861  290654 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:23:15.349935  290654 kubeadm.go:319] OS: Linux
	I1126 20:23:15.350004  290654 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:23:15.350083  290654 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:23:15.350164  290654 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:23:15.350237  290654 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:23:15.350299  290654 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:23:15.350384  290654 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:23:15.350446  290654 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:23:15.350520  290654 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:23:15.411648  290654 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:23:15.411792  290654 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:23:15.411920  290654 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:23:15.418763  290654 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:23:11.634593  292013 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-178152" ...
	I1126 20:23:11.634649  292013 cli_runner.go:164] Run: docker start default-k8s-diff-port-178152
	I1126 20:23:11.926041  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:11.945873  292013 kic.go:430] container "default-k8s-diff-port-178152" state is running.
	I1126 20:23:11.946183  292013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:23:11.965407  292013 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/config.json ...
	I1126 20:23:11.965672  292013 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:11.965754  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:11.984253  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:11.984606  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:11.984627  292013 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:11.985310  292013 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53020->127.0.0.1:33103: read: connection reset by peer
	I1126 20:23:15.122812  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178152
	
	I1126 20:23:15.122840  292013 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-178152"
	I1126 20:23:15.122905  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.141545  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:15.141743  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:15.141756  292013 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-178152 && echo "default-k8s-diff-port-178152" | sudo tee /etc/hostname
	I1126 20:23:15.288999  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178152
	
	I1126 20:23:15.289074  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.307961  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:15.308207  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:15.308232  292013 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-178152' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-178152/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-178152' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:15.447684  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:15.447708  292013 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:15.447742  292013 ubuntu.go:190] setting up certificates
	I1126 20:23:15.447753  292013 provision.go:84] configureAuth start
	I1126 20:23:15.447805  292013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:23:15.466227  292013 provision.go:143] copyHostCerts
	I1126 20:23:15.466276  292013 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:15.466286  292013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:15.466350  292013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:15.466445  292013 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:15.466454  292013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:15.466520  292013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:15.466598  292013 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:15.466607  292013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:15.466632  292013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:15.466694  292013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-178152 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-178152 localhost minikube]
	I1126 20:23:15.723525  292013 provision.go:177] copyRemoteCerts
	I1126 20:23:15.723583  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:15.723615  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.741675  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:15.840142  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:15.856793  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1126 20:23:15.872789  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:23:15.889502  292013 provision.go:87] duration metric: took 441.73745ms to configureAuth
	I1126 20:23:15.889527  292013 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:15.889739  292013 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:15.889861  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:15.909189  292013 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:15.909493  292013 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1126 20:23:15.909522  292013 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:16.239537  292013 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:16.239562  292013 machine.go:97] duration metric: took 4.273873255s to provisionDockerMachine
	I1126 20:23:16.239577  292013 start.go:293] postStartSetup for "default-k8s-diff-port-178152" (driver="docker")
	I1126 20:23:16.239591  292013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:16.239682  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:16.239737  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.260385  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:16.358126  292013 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:16.361405  292013 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:16.361440  292013 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:16.361451  292013 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:16.361509  292013 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:16.361599  292013 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:16.361707  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:16.369023  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:16.385925  292013 start.go:296] duration metric: took 146.337148ms for postStartSetup
	I1126 20:23:16.385989  292013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:16.386031  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.405445  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	W1126 20:23:13.193401  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	W1126 20:23:15.194538  279050 pod_ready.go:104] pod "coredns-66bc5c9577-wl4xp" is not "Ready", error: <nil>
	I1126 20:23:16.502288  292013 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:16.506679  292013 fix.go:56] duration metric: took 4.890731938s for fixHost
	I1126 20:23:16.506702  292013 start.go:83] releasing machines lock for "default-k8s-diff-port-178152", held for 4.890770543s
	I1126 20:23:16.506772  292013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-178152
	I1126 20:23:16.524986  292013 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:16.525024  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.525076  292013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:16.525147  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:16.543349  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:16.544787  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:16.710155  292013 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:16.717137  292013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:16.751512  292013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:16.755985  292013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:16.756075  292013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:16.763515  292013 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:23:16.763533  292013 start.go:496] detecting cgroup driver to use...
	I1126 20:23:16.763556  292013 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:16.763596  292013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:16.777637  292013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:16.789084  292013 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:16.789130  292013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:16.802415  292013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:16.814305  292013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:16.894876  292013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:16.973549  292013 docker.go:234] disabling docker service ...
	I1126 20:23:16.973602  292013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:16.987105  292013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:16.998823  292013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:17.079192  292013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:17.154663  292013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:17.166248  292013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:17.179608  292013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:17.179659  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.187979  292013 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:17.188022  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.197441  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.205620  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.214614  292013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:17.222358  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.230646  292013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.238512  292013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:17.246532  292013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:17.253262  292013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:17.260346  292013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:17.336611  292013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:23:17.482298  292013 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:17.482365  292013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:17.487204  292013 start.go:564] Will wait 60s for crictl version
	I1126 20:23:17.487266  292013 ssh_runner.go:195] Run: which crictl
	I1126 20:23:17.490714  292013 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:17.516962  292013 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:17.517029  292013 ssh_runner.go:195] Run: crio --version
	I1126 20:23:17.546625  292013 ssh_runner.go:195] Run: crio --version
	I1126 20:23:17.576514  292013 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1126 20:23:14.209234  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:23:16.209528  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:15.421393  290654 out.go:252]   - Generating certificates and keys ...
	I1126 20:23:15.421469  290654 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:23:15.421584  290654 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:23:15.901705  290654 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:23:16.198158  290654 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:23:16.755333  290654 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:23:16.910521  290654 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:23:17.293843  290654 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:23:17.294078  290654 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-825702 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:23:18.053504  290654 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:23:18.053707  290654 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-825702 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1126 20:23:17.577646  292013 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-178152 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:17.596268  292013 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:17.600505  292013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:17.610494  292013 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:23:17.610599  292013 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:17.610638  292013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:17.642078  292013 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:17.642098  292013 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:17.642144  292013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:17.668002  292013 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:17.668024  292013 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:17.668033  292013 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1126 20:23:17.668159  292013 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-178152 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:23:17.668231  292013 ssh_runner.go:195] Run: crio config
	I1126 20:23:17.728730  292013 cni.go:84] Creating CNI manager for ""
	I1126 20:23:17.728745  292013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:17.728757  292013 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:17.728780  292013 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-178152 NodeName:default-k8s-diff-port-178152 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:17.728904  292013 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-178152"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:23:17.728961  292013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:17.737340  292013 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:17.737397  292013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:17.744823  292013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1126 20:23:17.757195  292013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:17.769202  292013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1126 20:23:17.782349  292013 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:17.786032  292013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:17.795101  292013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:17.873013  292013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:17.897757  292013 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152 for IP: 192.168.85.2
	I1126 20:23:17.897775  292013 certs.go:195] generating shared ca certs ...
	I1126 20:23:17.897795  292013 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:17.897932  292013 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:17.897986  292013 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:17.898001  292013 certs.go:257] generating profile certs ...
	I1126 20:23:17.898093  292013 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/client.key
	I1126 20:23:17.898162  292013 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/apiserver.key.e0e0c015
	I1126 20:23:17.898218  292013 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/proxy-client.key
	I1126 20:23:17.898357  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:17.898403  292013 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:17.898418  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:17.898486  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:17.898527  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:17.898563  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:17.898625  292013 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:17.899165  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:17.918784  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:17.937235  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:17.955598  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:17.978718  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1126 20:23:17.998328  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:18.014824  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:18.030942  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/default-k8s-diff-port-178152/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:23:18.047085  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:18.063322  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:18.079509  292013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:18.098732  292013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:18.110426  292013 ssh_runner.go:195] Run: openssl version
	I1126 20:23:18.116110  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:18.124052  292013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:18.127654  292013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:18.127698  292013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:18.162629  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:23:18.170348  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:18.178740  292013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:18.182764  292013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:18.182806  292013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:18.234882  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:23:18.245881  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:18.255606  292013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:18.259552  292013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:18.259605  292013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:18.303253  292013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
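	The three `openssl x509 -hash` / `ln -fs` pairs above install minikube's CAs into the OpenSSL trust directory: OpenSSL looks up CAs in /etc/ssl/certs by a `<subject-hash>.0` symlink name. A minimal standalone sketch of that step, using a throwaway self-signed cert in a temp directory (all names here are illustrative, not minikube's):

```shell
set -e
dir=$(mktemp -d)
# Generate a throwaway self-signed CA to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# Compute the subject hash OpenSSL uses as the symlink name (e.g. b5213941).
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# Same shape as the logged command: create the <hash>.0 -> cert symlink.
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

	This is why the log pairs each cert with a hash-named link such as /etc/ssl/certs/b5213941.0: tools that trust the system store resolve the CA through that hash, not the friendly filename.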
	I1126 20:23:18.312096  292013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:18.316008  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:23:18.350634  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:23:18.384786  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:23:18.431599  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:23:18.475753  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:23:18.526391  292013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
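	The run of `openssl x509 ... -checkend 86400` commands above is minikube's cert-expiry probe: `-checkend N` exits 0 if the certificate is still valid N seconds from now, non-zero otherwise, which drives the regenerate-or-skip decision. A self-contained sketch with a throwaway cert (filenames are illustrative):

```shell
set -e
dir=$(mktemp -d)
# Throwaway cert valid for 2 days, standing in for apiserver-etcd-client.crt etc.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=checkend-demo" \
  -keyout "$dir/k" -out "$dir/c.crt" -days 2 2>/dev/null
# Exit status 0 means "still valid 86400s (24h) from now".
if openssl x509 -noout -in "$dir/c.crt" -checkend 86400; then
  echo "cert valid for at least another day"
else
  echo "cert expires within 86400s - would trigger regeneration"
fi
```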
	I1126 20:23:18.588346  292013 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-178152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-178152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:18.588449  292013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:18.588565  292013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:18.623451  292013 cri.go:89] found id: "851ab28993a8b771e5c0a2f66a99e6f22fbce78ae80259d843eb935540e459cb"
	I1126 20:23:18.623493  292013 cri.go:89] found id: "cd9d1e44673562de24369667807437c6a3c635356a7d55a049742bc674b8d8bb"
	I1126 20:23:18.623506  292013 cri.go:89] found id: "45aa87e14b73bdfe289340af54adbd31ea4129fb3bbd2a635ed0f95587efea73"
	I1126 20:23:18.623512  292013 cri.go:89] found id: "53d64e031f1a8bdf5d5b01443fbdf38f958b0ba7ea8c6514663f714eb5cedebf"
	I1126 20:23:18.623516  292013 cri.go:89] found id: ""
	I1126 20:23:18.623557  292013 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:23:18.638001  292013 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:23:18Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:23:18.638079  292013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:18.647325  292013 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:23:18.647339  292013 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:23:18.647376  292013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:23:18.655974  292013 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:23:18.657075  292013 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-178152" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:18.657847  292013 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-10722/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-178152" cluster setting kubeconfig missing "default-k8s-diff-port-178152" context setting]
	I1126 20:23:18.658988  292013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:18.661117  292013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:23:18.670104  292013 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:23:18.670133  292013 kubeadm.go:602] duration metric: took 22.788009ms to restartPrimaryControlPlane
	I1126 20:23:18.670142  292013 kubeadm.go:403] duration metric: took 81.823346ms to StartCluster
	I1126 20:23:18.670155  292013 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:18.670212  292013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:18.672246  292013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:18.672794  292013 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:18.672844  292013 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:23:18.672980  292013 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:23:18.673056  292013 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-178152"
	I1126 20:23:18.673072  292013 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-178152"
	W1126 20:23:18.673080  292013 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:23:18.673108  292013 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:23:18.673596  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.673682  292013 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-178152"
	I1126 20:23:18.673710  292013 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-178152"
	W1126 20:23:18.673722  292013 addons.go:248] addon dashboard should already be in state true
	I1126 20:23:18.673756  292013 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:23:18.673928  292013 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-178152"
	I1126 20:23:18.673951  292013 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-178152"
	I1126 20:23:18.674245  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.674255  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.675133  292013 out.go:179] * Verifying Kubernetes components...
	I1126 20:23:18.676193  292013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:18.701859  292013 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:23:18.701926  292013 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:23:18.703240  292013 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:18.703295  292013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:23:18.703350  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:18.703745  292013 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:23:17.693567  279050 pod_ready.go:94] pod "coredns-66bc5c9577-wl4xp" is "Ready"
	I1126 20:23:17.693591  279050 pod_ready.go:86] duration metric: took 35.505181868s for pod "coredns-66bc5c9577-wl4xp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.696095  279050 pod_ready.go:83] waiting for pod "etcd-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.713173  279050 pod_ready.go:94] pod "etcd-no-preload-026579" is "Ready"
	I1126 20:23:17.713232  279050 pod_ready.go:86] duration metric: took 17.078305ms for pod "etcd-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.718741  279050 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.723995  279050 pod_ready.go:94] pod "kube-apiserver-no-preload-026579" is "Ready"
	I1126 20:23:17.724017  279050 pod_ready.go:86] duration metric: took 5.252182ms for pod "kube-apiserver-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.726428  279050 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:17.894598  279050 pod_ready.go:94] pod "kube-controller-manager-no-preload-026579" is "Ready"
	I1126 20:23:17.894629  279050 pod_ready.go:86] duration metric: took 168.177715ms for pod "kube-controller-manager-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:18.091824  279050 pod_ready.go:83] waiting for pod "kube-proxy-ktbwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:18.492571  279050 pod_ready.go:94] pod "kube-proxy-ktbwp" is "Ready"
	I1126 20:23:18.492601  279050 pod_ready.go:86] duration metric: took 400.748457ms for pod "kube-proxy-ktbwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:18.693343  279050 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:19.091809  279050 pod_ready.go:94] pod "kube-scheduler-no-preload-026579" is "Ready"
	I1126 20:23:19.091845  279050 pod_ready.go:86] duration metric: took 398.476699ms for pod "kube-scheduler-no-preload-026579" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:19.091860  279050 pod_ready.go:40] duration metric: took 36.906405377s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:19.153238  279050 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:23:19.155165  279050 out.go:179] * Done! kubectl is now configured to use "no-preload-026579" cluster and "default" namespace by default
	I1126 20:23:18.705569  292013 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-178152"
	W1126 20:23:18.705587  292013 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:23:18.705612  292013 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:23:18.706157  292013 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:23:18.706657  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:23:18.706739  292013 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:23:18.706807  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:18.739172  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:18.742081  292013 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:18.742144  292013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:23:18.742203  292013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:23:18.743125  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:18.771281  292013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:23:18.836808  292013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:18.849945  292013 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-178152" to be "Ready" ...
	I1126 20:23:18.858581  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:23:18.858600  292013 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:23:18.864903  292013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:18.873985  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:23:18.874003  292013 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:23:18.891785  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:23:18.891800  292013 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:23:18.898868  292013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:18.914781  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:23:18.914799  292013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:23:18.940507  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:23:18.940588  292013 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:23:18.961370  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:23:18.961480  292013 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:23:18.979847  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:23:18.979869  292013 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:23:18.997450  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:23:18.997496  292013 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:23:19.014774  292013 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:23:19.014798  292013 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:23:19.030627  292013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:23:20.076513  292013 node_ready.go:49] node "default-k8s-diff-port-178152" is "Ready"
	I1126 20:23:20.076545  292013 node_ready.go:38] duration metric: took 1.226568266s for node "default-k8s-diff-port-178152" to be "Ready" ...
	I1126 20:23:20.076561  292013 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:23:20.076614  292013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:23:20.650346  292013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.785411832s)
	I1126 20:23:20.650423  292013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.751538841s)
	I1126 20:23:20.650697  292013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.620030532s)
	I1126 20:23:20.650744  292013 api_server.go:72] duration metric: took 1.977874686s to wait for apiserver process to appear ...
	I1126 20:23:20.650766  292013 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:23:20.650789  292013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:23:20.652272  292013 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-178152 addons enable metrics-server
	
	I1126 20:23:20.655372  292013 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:23:20.655401  292013 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:23:20.659424  292013 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1126 20:23:20.660341  292013 addons.go:530] duration metric: took 1.987365178s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1126 20:23:21.151632  292013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:23:21.157395  292013 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:23:21.157415  292013 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:23:18.885333  290654 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:23:19.301808  290654 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:23:19.695191  290654 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:23:19.695440  290654 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:23:19.825600  290654 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:23:20.340649  290654 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:23:20.724366  290654 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:23:21.485824  290654 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:23:21.625826  290654 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:23:21.626296  290654 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:23:21.629820  290654 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1126 20:23:18.214208  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	W1126 20:23:20.709235  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:21.631094  290654 out.go:252]   - Booting up control plane ...
	I1126 20:23:21.631238  290654 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:23:21.631371  290654 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:23:21.632360  290654 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:23:21.645214  290654 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:23:21.645361  290654 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:23:21.652406  290654 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:23:21.652729  290654 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:23:21.652815  290654 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:23:21.764903  290654 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:23:21.765102  290654 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 20:23:22.766639  290654 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001829324s
	I1126 20:23:22.771587  290654 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:23:22.771713  290654 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1126 20:23:22.771850  290654 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:23:22.771976  290654 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1126 20:23:22.710254  281230 pod_ready.go:104] pod "coredns-66bc5c9577-s8rrr" is not "Ready", error: <nil>
	I1126 20:23:23.209497  281230 pod_ready.go:94] pod "coredns-66bc5c9577-s8rrr" is "Ready"
	I1126 20:23:23.209526  281230 pod_ready.go:86] duration metric: took 35.005140298s for pod "coredns-66bc5c9577-s8rrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.212056  281230 pod_ready.go:83] waiting for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.215893  281230 pod_ready.go:94] pod "etcd-embed-certs-949294" is "Ready"
	I1126 20:23:23.215912  281230 pod_ready.go:86] duration metric: took 3.835439ms for pod "etcd-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.217794  281230 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.221490  281230 pod_ready.go:94] pod "kube-apiserver-embed-certs-949294" is "Ready"
	I1126 20:23:23.221507  281230 pod_ready.go:86] duration metric: took 3.693704ms for pod "kube-apiserver-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.223412  281230 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.408291  281230 pod_ready.go:94] pod "kube-controller-manager-embed-certs-949294" is "Ready"
	I1126 20:23:23.408318  281230 pod_ready.go:86] duration metric: took 184.882309ms for pod "kube-controller-manager-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:23.608513  281230 pod_ready.go:83] waiting for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.008474  281230 pod_ready.go:94] pod "kube-proxy-qnjvr" is "Ready"
	I1126 20:23:24.008506  281230 pod_ready.go:86] duration metric: took 399.965276ms for pod "kube-proxy-qnjvr" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.207557  281230 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.607949  281230 pod_ready.go:94] pod "kube-scheduler-embed-certs-949294" is "Ready"
	I1126 20:23:24.607973  281230 pod_ready.go:86] duration metric: took 400.390059ms for pod "kube-scheduler-embed-certs-949294" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:24.607985  281230 pod_ready.go:40] duration metric: took 36.408614043s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:24.660574  281230 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:23:24.662064  281230 out.go:179] * Done! kubectl is now configured to use "embed-certs-949294" cluster and "default" namespace by default
	I1126 20:23:21.651516  292013 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1126 20:23:21.655923  292013 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1126 20:23:21.656902  292013 api_server.go:141] control plane version: v1.34.1
	I1126 20:23:21.656929  292013 api_server.go:131] duration metric: took 1.00615123s to wait for apiserver health ...
	I1126 20:23:21.656939  292013 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:23:21.660424  292013 system_pods.go:59] 8 kube-system pods found
	I1126 20:23:21.660494  292013 system_pods.go:61] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:21.660509  292013 system_pods.go:61] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:23:21.660522  292013 system_pods.go:61] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:23:21.660530  292013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:23:21.660541  292013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:23:21.660553  292013 system_pods.go:61] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:23:21.660563  292013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:23:21.660573  292013 system_pods.go:61] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:21.660578  292013 system_pods.go:74] duration metric: took 3.633523ms to wait for pod list to return data ...
	I1126 20:23:21.660586  292013 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:23:21.662722  292013 default_sa.go:45] found service account: "default"
	I1126 20:23:21.662739  292013 default_sa.go:55] duration metric: took 2.147793ms for default service account to be created ...
	I1126 20:23:21.662747  292013 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:23:21.665171  292013 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:21.665193  292013 system_pods.go:89] "coredns-66bc5c9577-tpmmm" [20166f90-76ba-4092-aab9-29683f4fc146] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:21.665209  292013 system_pods.go:89] "etcd-default-k8s-diff-port-178152" [b1b7fbf2-3a62-4441-9f2a-1fc512882320] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:23:21.665224  292013 system_pods.go:89] "kindnet-bmzz2" [ad4ae092-70cd-48b0-9099-854ccce3329d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:23:21.665236  292013 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-178152" [57a5fbc3-1a74-4cd9-af68-a9cb4053ee40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:23:21.665250  292013 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-178152" [41dc1851-5cc2-414e-8ef8-b25e098649c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:23:21.665260  292013 system_pods.go:89] "kube-proxy-vd7fp" [37371ddf-6fde-4f46-a877-97f4112ff1b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:23:21.665271  292013 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-178152" [f1a6a26d-5146-4ffc-9bb3-20349686988f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:23:21.665282  292013 system_pods.go:89] "storage-provisioner" [0ed42547-a316-4970-b4e7-f2157c68ac06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:21.665293  292013 system_pods.go:126] duration metric: took 2.539795ms to wait for k8s-apps to be running ...
	I1126 20:23:21.665305  292013 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:23:21.665350  292013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:21.679704  292013 system_svc.go:56] duration metric: took 14.393906ms WaitForService to wait for kubelet
	I1126 20:23:21.679732  292013 kubeadm.go:587] duration metric: took 3.006859665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:21.679763  292013 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:23:21.683714  292013 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:23:21.683746  292013 node_conditions.go:123] node cpu capacity is 8
	I1126 20:23:21.683761  292013 node_conditions.go:105] duration metric: took 3.992542ms to run NodePressure ...
	I1126 20:23:21.683776  292013 start.go:242] waiting for startup goroutines ...
	I1126 20:23:21.683787  292013 start.go:247] waiting for cluster config update ...
	I1126 20:23:21.683803  292013 start.go:256] writing updated cluster config ...
	I1126 20:23:21.684090  292013 ssh_runner.go:195] Run: rm -f paused
	I1126 20:23:21.690737  292013 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:21.694957  292013 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:23:23.700019  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:25.700369  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:24.334563  290654 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.562900759s
	I1126 20:23:25.096556  290654 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.32501021s
	I1126 20:23:26.773126  290654 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001480312s
	I1126 20:23:26.785982  290654 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:23:26.795346  290654 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:23:26.803771  290654 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:23:26.804039  290654 kubeadm.go:319] [mark-control-plane] Marking the node auto-825702 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:23:26.811540  290654 kubeadm.go:319] [bootstrap-token] Using token: cfepsv.ze7li0ueqiisv4u1
	I1126 20:23:26.812735  290654 out.go:252]   - Configuring RBAC rules ...
	I1126 20:23:26.812902  290654 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:23:26.815933  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:23:26.822050  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:23:26.824581  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:23:26.827827  290654 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:23:26.830088  290654 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:23:27.180011  290654 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:23:27.604196  290654 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:23:28.179792  290654 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:23:28.181097  290654 kubeadm.go:319] 
	I1126 20:23:28.181178  290654 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:23:28.181188  290654 kubeadm.go:319] 
	I1126 20:23:28.181271  290654 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:23:28.181284  290654 kubeadm.go:319] 
	I1126 20:23:28.181314  290654 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:23:28.181393  290654 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:23:28.181508  290654 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:23:28.181518  290654 kubeadm.go:319] 
	I1126 20:23:28.181588  290654 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:23:28.181599  290654 kubeadm.go:319] 
	I1126 20:23:28.181662  290654 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:23:28.181671  290654 kubeadm.go:319] 
	I1126 20:23:28.181740  290654 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:23:28.181890  290654 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:23:28.181992  290654 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:23:28.182004  290654 kubeadm.go:319] 
	I1126 20:23:28.182118  290654 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:23:28.182257  290654 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:23:28.182277  290654 kubeadm.go:319] 
	I1126 20:23:28.182389  290654 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cfepsv.ze7li0ueqiisv4u1 \
	I1126 20:23:28.182607  290654 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 20:23:28.182665  290654 kubeadm.go:319] 	--control-plane 
	I1126 20:23:28.182683  290654 kubeadm.go:319] 
	I1126 20:23:28.182781  290654 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:23:28.182794  290654 kubeadm.go:319] 
	I1126 20:23:28.182920  290654 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cfepsv.ze7li0ueqiisv4u1 \
	I1126 20:23:28.183058  290654 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
	I1126 20:23:28.186330  290654 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 20:23:28.186520  290654 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:23:28.186555  290654 cni.go:84] Creating CNI manager for ""
	I1126 20:23:28.186568  290654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:23:28.189613  290654 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1126 20:23:27.701613  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:30.200387  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:28.190997  290654 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:23:28.196682  290654 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:23:28.196700  290654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:23:28.212764  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:23:28.451574  290654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:23:28.451657  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:28.451736  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-825702 minikube.k8s.io/updated_at=2025_11_26T20_23_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=auto-825702 minikube.k8s.io/primary=true
	I1126 20:23:28.594872  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:28.594872  290654 ops.go:34] apiserver oom_adj: -16
	I1126 20:23:29.095986  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:29.595806  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:30.095675  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:30.595668  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:31.095846  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:31.595663  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:32.095085  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:32.595453  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:33.095688  290654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:23:33.186891  290654 kubeadm.go:1114] duration metric: took 4.735299943s to wait for elevateKubeSystemPrivileges
	I1126 20:23:33.187041  290654 kubeadm.go:403] duration metric: took 18.004754645s to StartCluster
	I1126 20:23:33.187069  290654 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:33.187159  290654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:33.189959  290654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:33.190264  290654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:23:33.190276  290654 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:23:33.190348  290654 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:23:33.190418  290654 addons.go:70] Setting storage-provisioner=true in profile "auto-825702"
	I1126 20:23:33.190430  290654 addons.go:239] Setting addon storage-provisioner=true in "auto-825702"
	I1126 20:23:33.190452  290654 host.go:66] Checking if "auto-825702" exists ...
	I1126 20:23:33.190569  290654 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:33.190656  290654 addons.go:70] Setting default-storageclass=true in profile "auto-825702"
	I1126 20:23:33.190674  290654 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-825702"
	I1126 20:23:33.190997  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:33.191067  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:33.191443  290654 out.go:179] * Verifying Kubernetes components...
	I1126 20:23:33.193928  290654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:33.219111  290654 addons.go:239] Setting addon default-storageclass=true in "auto-825702"
	I1126 20:23:33.219159  290654 host.go:66] Checking if "auto-825702" exists ...
	I1126 20:23:33.219759  290654 cli_runner.go:164] Run: docker container inspect auto-825702 --format={{.State.Status}}
	I1126 20:23:33.220320  290654 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:23:33.221988  290654 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:33.222009  290654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:23:33.222065  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:33.245865  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:33.249499  290654 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:33.249519  290654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:23:33.249591  290654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-825702
	I1126 20:23:33.273726  290654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/auto-825702/id_rsa Username:docker}
	I1126 20:23:33.288579  290654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:23:33.352015  290654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:33.372793  290654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:23:33.394803  290654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:23:33.477827  290654 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1126 20:23:33.480376  290654 node_ready.go:35] waiting up to 15m0s for node "auto-825702" to be "Ready" ...
	I1126 20:23:33.693090  290654 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1126 20:23:32.202348  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:34.699921  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:33.694069  290654 addons.go:530] duration metric: took 503.718553ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:23:33.983227  290654 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-825702" context rescaled to 1 replicas
	W1126 20:23:35.484973  290654 node_ready.go:57] node "auto-825702" has "Ready":"False" status (will retry)
	W1126 20:23:37.982833  290654 node_ready.go:57] node "auto-825702" has "Ready":"False" status (will retry)
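The `pod_ready.go` and `node_ready.go` lines above all follow the same pattern: poll the object's status, log a warning on each not-ready poll, stop on "Ready" or on timeout, and report the total wall time as a "duration metric". A minimal sketch of that loop, with a stubbed status function standing in for the real Kubernetes API call (the function name `wait_pod_ready` and the phase strings are illustrative, not minikube's actual API):

```python
import time

def wait_pod_ready(get_phase, timeout_s=240.0, interval_s=0.01):
    """Poll get_phase() until it returns "Ready" or the deadline passes.

    Mirrors the retry loop visible in the log: each non-ready poll would be
    logged as a warning, a "Ready" result ends the wait, and the elapsed
    wall time is returned so it can be reported as a duration metric.
    """
    start = time.monotonic()
    while True:
        if get_phase() == "Ready":
            return time.monotonic() - start
        if time.monotonic() - start > timeout_s:
            raise TimeoutError("pod never became Ready")
        time.sleep(interval_s)

# Simulated pod: not ready for the first two polls, then Ready.
phases = iter(["Pending", "NotReady", "Ready"])
elapsed = wait_pod_ready(lambda: next(phases))
print(f"pod became Ready after {elapsed:.3f}s")
```

In the real test the timeout is the "extra waiting up to 4m0s" seen at 20:23:21.690737, and the poll interval is on the order of seconds rather than milliseconds.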
	
	
	==> CRI-O <==
	Nov 26 20:23:08 embed-certs-949294 crio[564]: time="2025-11-26T20:23:08.140292229Z" level=info msg="Started container" PID=1741 containerID=020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper id=1661ef3e-e7d7-428b-98ce-ff29e1fdb991 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32f7ccb83cb61788747d40160ba4dbe5377423417cb8388ccb0197c805fdd8ba
	Nov 26 20:23:08 embed-certs-949294 crio[564]: time="2025-11-26T20:23:08.189201194Z" level=info msg="Removing container: d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc" id=598784d1-840e-4118-9808-00e9f88a2a82 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:23:08 embed-certs-949294 crio[564]: time="2025-11-26T20:23:08.220061688Z" level=info msg="Removed container d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper" id=598784d1-840e-4118-9808-00e9f88a2a82 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.224006258Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a88eede4-24fd-48bb-b693-73f1db0d33f5 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.226259529Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=01edc010-9cd5-43e5-b4df-991b1ccb9aa4 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.22818816Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5f095170-c08a-4dd6-bfdd-30781522c0f5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.228309158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.235136324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.235332023Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8b152c78016a55cdfb17358d9166992b5a8c4555d0390b0dc22c5b95740230c2/merged/etc/passwd: no such file or directory"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.235469733Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8b152c78016a55cdfb17358d9166992b5a8c4555d0390b0dc22c5b95740230c2/merged/etc/group: no such file or directory"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.235867315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.261876141Z" level=info msg="Created container ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f: kube-system/storage-provisioner/storage-provisioner" id=5f095170-c08a-4dd6-bfdd-30781522c0f5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.262435215Z" level=info msg="Starting container: ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f" id=6dbbe136-74c2-4d9d-acc8-b3ce74bfdc3e name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.264508204Z" level=info msg="Started container" PID=1755 containerID=ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f description=kube-system/storage-provisioner/storage-provisioner id=6dbbe136-74c2-4d9d-acc8-b3ce74bfdc3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab3ec70812556a26a2cd10250abf4e2273aa7627a85e7f389f353386a314d3eb
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.08888312Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=10f19a12-718b-4079-9b41-fa2c3c5c26fa name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.090201465Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dc1a5c3b-5f75-4b04-a302-743d938c8946 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.091318552Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper" id=4174d35a-03a6-423e-8ea3-5b48858f8ca1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.091446643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.100087787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.100825465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.143627036Z" level=info msg="Created container b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper" id=4174d35a-03a6-423e-8ea3-5b48858f8ca1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.144258632Z" level=info msg="Starting container: b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2" id=38a7ba88-cb4b-4a03-a664-14bfebf30573 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.146687409Z" level=info msg="Started container" PID=1789 containerID=b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper id=38a7ba88-cb4b-4a03-a664-14bfebf30573 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32f7ccb83cb61788747d40160ba4dbe5377423417cb8388ccb0197c805fdd8ba
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.267541083Z" level=info msg="Removing container: 020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5" id=368bc918-c870-422e-a070-add8b2a59b21 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.277904129Z" level=info msg="Removed container 020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper" id=368bc918-c870-422e-a070-add8b2a59b21 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b05705dc359bb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   32f7ccb83cb61       dashboard-metrics-scraper-6ffb444bf9-lz5p9   kubernetes-dashboard
	ae31bf0fea5ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   ab3ec70812556       storage-provisioner                          kube-system
	3a8d799bf870a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   afbd06de88e28       kubernetes-dashboard-855c9754f9-8dsr7        kubernetes-dashboard
	93176a1ab732c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   fb11f0e1300b7       busybox                                      default
	cfe8d29a25b15       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   e6bede0dbd68e       coredns-66bc5c9577-s8rrr                     kube-system
	c8864fe978873       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   ab3ec70812556       storage-provisioner                          kube-system
	be832c3cf0e3e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   fc0a7c56888bc       kube-proxy-qnjvr                             kube-system
	365c78f7ca471       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   546338848a0b9       kindnet-9546l                                kube-system
	d0edb51e9e5fc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   d438afad05edc       kube-apiserver-embed-certs-949294            kube-system
	1247796ab1281       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   75490021d8a88       kube-scheduler-embed-certs-949294            kube-system
	f265ea81a0961       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   1f2a6a52b674a       kube-controller-manager-embed-certs-949294   kube-system
	27f1868f6da52       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   cf4385ce2aa61       etcd-embed-certs-949294                      kube-system
	
	
	==> coredns [cfe8d29a25b15896fe250c2367ea5bf40c17ce3f9aa972b6a3000afe2cea2ba4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:35918 - 19879 "HINFO IN 4178472970200051923.8652175570046282908. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.456862561s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-949294
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-949294
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=embed-certs-949294
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_21_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-949294
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:23:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:23:17 +0000   Wed, 26 Nov 2025 20:21:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:23:17 +0000   Wed, 26 Nov 2025 20:21:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:23:17 +0000   Wed, 26 Nov 2025 20:21:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:23:17 +0000   Wed, 26 Nov 2025 20:22:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-949294
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                aa80874f-b877-4d80-93ab-b99d96f2b5aa
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-s8rrr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-949294                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-9546l                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-949294             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-949294    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-qnjvr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-949294             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lz5p9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8dsr7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node embed-certs-949294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node embed-certs-949294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node embed-certs-949294 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node embed-certs-949294 event: Registered Node embed-certs-949294 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-949294 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node embed-certs-949294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node embed-certs-949294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node embed-certs-949294 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node embed-certs-949294 event: Registered Node embed-certs-949294 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [27f1868f6da521ee9bf27fc5eccba3b561b07052f526f759dfbebcc382b682e0] <==
	{"level":"warn","ts":"2025-11-26T20:22:45.931881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.938438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.945428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.954308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.961834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.970105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.977542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.986273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.995905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.015109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.026256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.048824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.056401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.065080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.071709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.079246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.091828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.099869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.108086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.123919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.130128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.136454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.182308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:07.323522Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.109827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-s8rrr\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"info","ts":"2025-11-26T20:23:07.323628Z","caller":"traceutil/trace.go:172","msg":"trace[1741218818] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-s8rrr; range_end:; response_count:1; response_revision:606; }","duration":"117.234691ms","start":"2025-11-26T20:23:07.206378Z","end":"2025-11-26T20:23:07.323613Z","steps":["trace[1741218818] 'range keys from in-memory index tree'  (duration: 116.958118ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:23:39 up  1:06,  0 user,  load average: 3.28, 3.15, 2.13
	Linux embed-certs-949294 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [365c78f7ca47179e6d1e36f011f96db8f3f25d8d01ed886707e3e02c4beb2040] <==
	I1126 20:22:47.690921       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:22:47.691171       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1126 20:22:47.691313       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:22:47.691326       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:22:47.691349       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:22:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:22:47.895696       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:22:47.895723       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:22:47.895737       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:22:47.895867       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:22:48.196991       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:22:48.197027       1 metrics.go:72] Registering metrics
	I1126 20:22:48.197113       1 controller.go:711] "Syncing nftables rules"
	I1126 20:22:57.895597       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:22:57.895686       1 main.go:301] handling current node
	I1126 20:23:07.898533       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:23:07.898567       1 main.go:301] handling current node
	I1126 20:23:17.895200       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:23:17.895227       1 main.go:301] handling current node
	I1126 20:23:27.898557       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:23:27.898593       1 main.go:301] handling current node
	I1126 20:23:37.895630       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:23:37.895656       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0edb51e9e5fccff0dc7134b628a507303eb3f0ea693960b2ef07c819ccfcfb3] <==
	I1126 20:22:46.865107       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:22:46.865128       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:22:46.865142       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:22:46.867391       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:22:46.874577       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1126 20:22:46.879297       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:22:46.895881       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:22:46.906037       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1126 20:22:46.906072       1 policy_source.go:240] refreshing policies
	I1126 20:22:46.906742       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:22:46.945271       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:22:46.945368       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:22:46.945387       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:22:46.952973       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:22:47.189013       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:22:47.243758       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:22:47.377424       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:22:47.407615       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:22:47.424757       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:22:47.523942       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.67.8"}
	I1126 20:22:47.539945       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.175.98"}
	I1126 20:22:47.751441       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:22:50.474497       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:22:50.672414       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:22:50.770861       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f265ea81a09610c82177779173f228ed110d405daaad945e9224abda5afc655e] <==
	I1126 20:22:50.221408       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:22:50.221438       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:22:50.221505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:22:50.221809       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 20:22:50.223742       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:22:50.223796       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:22:50.223814       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:22:50.224946       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:22:50.227542       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:50.230600       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:22:50.254293       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:50.259955       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:22:50.264420       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:50.268739       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1126 20:22:50.269964       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:50.269982       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:22:50.269991       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:22:50.270779       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:22:50.274814       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:22:50.278514       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:22:50.283514       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:22:50.283662       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:22:50.283794       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-949294"
	I1126 20:22:50.283882       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 20:22:50.319028       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [be832c3cf0e3ebddcc1aa2777d57e2d38d2836d83f9be1d7f3aeba656fd95381] <==
	I1126 20:22:47.540724       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:22:47.613580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:22:47.714551       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:22:47.714591       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1126 20:22:47.714687       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:22:47.734522       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:22:47.734597       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:22:47.740993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:22:47.741490       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:22:47.741525       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:47.745258       1 config.go:309] "Starting node config controller"
	I1126 20:22:47.745324       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:22:47.745431       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:22:47.745504       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:22:47.745780       1 config.go:200] "Starting service config controller"
	I1126 20:22:47.745803       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:22:47.745841       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:22:47.745855       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:22:47.845974       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:22:47.845970       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:22:47.846018       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:22:47.846029       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1247796ab1281fb5ee5525e146ff9c03a80b5e36662073b88f7b4335b21630fa] <==
	I1126 20:22:45.338294       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:22:46.809208       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:22:46.809260       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:22:46.809272       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:22:46.809281       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:22:46.865537       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:22:46.865570       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:46.870823       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:46.874677       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:46.871605       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:22:46.871645       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:22:46.975041       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:22:55 embed-certs-949294 kubelet[730]: I1126 20:22:55.154257     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:22:55 embed-certs-949294 kubelet[730]: E1126 20:22:55.155126     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:22:56 embed-certs-949294 kubelet[730]: I1126 20:22:56.157322     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:22:56 embed-certs-949294 kubelet[730]: E1126 20:22:56.157545     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:22:57 embed-certs-949294 kubelet[730]: I1126 20:22:57.159438     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:22:57 embed-certs-949294 kubelet[730]: E1126 20:22:57.159695     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:22:59 embed-certs-949294 kubelet[730]: I1126 20:22:59.178307     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8dsr7" podStartSLOduration=2.193407743 podStartE2EDuration="9.178288191s" podCreationTimestamp="2025-11-26 20:22:50 +0000 UTC" firstStartedPulling="2025-11-26 20:22:51.207850348 +0000 UTC m=+7.206058738" lastFinishedPulling="2025-11-26 20:22:58.192730787 +0000 UTC m=+14.190939186" observedRunningTime="2025-11-26 20:22:59.178110237 +0000 UTC m=+15.176318669" watchObservedRunningTime="2025-11-26 20:22:59.178288191 +0000 UTC m=+15.176496600"
	Nov 26 20:23:08 embed-certs-949294 kubelet[730]: I1126 20:23:08.088253     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:23:08 embed-certs-949294 kubelet[730]: I1126 20:23:08.187814     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:23:08 embed-certs-949294 kubelet[730]: I1126 20:23:08.188005     730 scope.go:117] "RemoveContainer" containerID="020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5"
	Nov 26 20:23:08 embed-certs-949294 kubelet[730]: E1126 20:23:08.188224     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:23:16 embed-certs-949294 kubelet[730]: I1126 20:23:16.055938     730 scope.go:117] "RemoveContainer" containerID="020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5"
	Nov 26 20:23:16 embed-certs-949294 kubelet[730]: E1126 20:23:16.056141     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:23:18 embed-certs-949294 kubelet[730]: I1126 20:23:18.223232     730 scope.go:117] "RemoveContainer" containerID="c8864fe9788731d2ce2001c0936defdb11739fcb35cd62e407e708947119748c"
	Nov 26 20:23:30 embed-certs-949294 kubelet[730]: I1126 20:23:30.088364     730 scope.go:117] "RemoveContainer" containerID="020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5"
	Nov 26 20:23:30 embed-certs-949294 kubelet[730]: I1126 20:23:30.266206     730 scope.go:117] "RemoveContainer" containerID="020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5"
	Nov 26 20:23:30 embed-certs-949294 kubelet[730]: I1126 20:23:30.266386     730 scope.go:117] "RemoveContainer" containerID="b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2"
	Nov 26 20:23:30 embed-certs-949294 kubelet[730]: E1126 20:23:30.266624     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:23:36 embed-certs-949294 kubelet[730]: I1126 20:23:36.055535     730 scope.go:117] "RemoveContainer" containerID="b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2"
	Nov 26 20:23:36 embed-certs-949294 kubelet[730]: E1126 20:23:36.055777     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:23:36 embed-certs-949294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:23:36 embed-certs-949294 kubelet[730]: I1126 20:23:36.803886     730 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 26 20:23:36 embed-certs-949294 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:23:36 embed-certs-949294 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:23:36 embed-certs-949294 systemd[1]: kubelet.service: Consumed 1.689s CPU time.
	
	
	==> kubernetes-dashboard [3a8d799bf870a2ddb08ba11d6e10375455a0f1bff23f82425087bf768c717932] <==
	2025/11/26 20:22:58 Starting overwatch
	2025/11/26 20:22:58 Using namespace: kubernetes-dashboard
	2025/11/26 20:22:58 Using in-cluster config to connect to apiserver
	2025/11/26 20:22:58 Using secret token for csrf signing
	2025/11/26 20:22:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:22:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:22:58 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:22:58 Generating JWE encryption key
	2025/11/26 20:22:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:22:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:22:58 Initializing JWE encryption key from synchronized object
	2025/11/26 20:22:58 Creating in-cluster Sidecar client
	2025/11/26 20:22:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:22:58 Serving insecurely on HTTP port: 9090
	2025/11/26 20:23:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f] <==
	I1126 20:23:18.278538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:23:18.287248       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:23:18.287302       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:23:18.289200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:21.744382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:26.005422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:29.604913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:32.659106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:35.681820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:35.686076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:23:35.686248       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:23:35.686417       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-949294_07f263f2-0bfd-4721-a0c3-f0f3e2b5a8ca!
	I1126 20:23:35.686425       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"05f20356-e266-4bee-9af8-d671ea0ca424", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-949294_07f263f2-0bfd-4721-a0c3-f0f3e2b5a8ca became leader
	W1126 20:23:35.688080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:35.692089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:23:35.786646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-949294_07f263f2-0bfd-4721-a0c3-f0f3e2b5a8ca!
	W1126 20:23:37.695245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:37.700410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:39.703540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:39.708039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c8864fe9788731d2ce2001c0936defdb11739fcb35cd62e407e708947119748c] <==
	I1126 20:22:47.516378       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:23:17.521603       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-949294 -n embed-certs-949294
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-949294 -n embed-certs-949294: exit status 2 (343.3676ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-949294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-949294
helpers_test.go:243: (dbg) docker inspect embed-certs-949294:

-- stdout --
	[
	    {
	        "Id": "86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430",
	        "Created": "2025-11-26T20:21:31.21255744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281602,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:22:37.846330329Z",
	            "FinishedAt": "2025-11-26T20:22:36.906774097Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/hostname",
	        "HostsPath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/hosts",
	        "LogPath": "/var/lib/docker/containers/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430/86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430-json.log",
	        "Name": "/embed-certs-949294",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-949294:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-949294",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86fea694f6d23eb28c927f14e9521ffcaeb05561b1a903e3154464f7a4ba4430",
	                "LowerDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17c74d8eb31c82401cf3f989170353a9316d246a288a8fc84a26c55d35f4bff9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-949294",
	                "Source": "/var/lib/docker/volumes/embed-certs-949294/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-949294",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-949294",
	                "name.minikube.sigs.k8s.io": "embed-certs-949294",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b3911b3063b901182b9c3ae78e80109ca672f538e5054538cc2eb8d96b6cf713",
	            "SandboxKey": "/var/run/docker/netns/b3911b3063b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-949294": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7fd9c7914891185e47dacdba5bd1d1c0b9a651e39050d7a01ee422b067e5fad7",
	                    "EndpointID": "6b8b3b6e4b6f17f47bf591f2661234c50c44840efc981abc1c2c939f32fcca2c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "66:60:2e:2e:ce:71",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-949294",
	                        "86fea694f6d2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-949294 -n embed-certs-949294
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-949294 -n embed-certs-949294: exit status 2 (354.00737ms)

-- stdout --
	Running

                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-949294 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-949294 logs -n 25: (1.146225569s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-949294 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p no-preload-026579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-297942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-949294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ stop    │ -p newest-cni-297942 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-297942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ start   │ -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-178152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ image   │ newest-cni-297942 image list --format=json                                                                                                                                                                                                    │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:22 UTC │
	│ pause   │ -p newest-cni-297942 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-178152 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ delete  │ -p newest-cni-297942                                                                                                                                                                                                                          │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:22 UTC │ 26 Nov 25 20:23 UTC │
	│ delete  │ -p newest-cni-297942                                                                                                                                                                                                                          │ newest-cni-297942            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p auto-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-178152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ image   │ no-preload-026579 image list --format=json                                                                                                                                                                                                    │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ pause   │ -p no-preload-026579 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ image   │ embed-certs-949294 image list --format=json                                                                                                                                                                                                   │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ pause   │ -p embed-certs-949294 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-949294           │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	│ delete  │ -p no-preload-026579                                                                                                                                                                                                                          │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ delete  │ -p no-preload-026579                                                                                                                                                                                                                          │ no-preload-026579            │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p kindnet-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-825702               │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:23:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:23:40.468966  299373 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:23:40.469078  299373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:40.469092  299373 out.go:374] Setting ErrFile to fd 2...
	I1126 20:23:40.469097  299373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:40.469356  299373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:23:40.470005  299373 out.go:368] Setting JSON to false
	I1126 20:23:40.471651  299373 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3970,"bootTime":1764184650,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:23:40.471724  299373 start.go:143] virtualization: kvm guest
	I1126 20:23:40.473655  299373 out.go:179] * [kindnet-825702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:23:40.474818  299373 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:23:40.474816  299373 notify.go:221] Checking for updates...
	I1126 20:23:40.476843  299373 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:23:40.477919  299373 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:40.478914  299373 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:23:40.479906  299373 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:23:40.480997  299373 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:23:40.482743  299373 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:40.482917  299373 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:40.483047  299373 config.go:182] Loaded profile config "embed-certs-949294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:40.483192  299373 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:23:40.512119  299373 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:23:40.512201  299373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:40.573846  299373 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-26 20:23:40.563293181 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:40.573996  299373 docker.go:319] overlay module found
	I1126 20:23:40.575890  299373 out.go:179] * Using the docker driver based on user configuration
	I1126 20:23:40.576907  299373 start.go:309] selected driver: docker
	I1126 20:23:40.576924  299373 start.go:927] validating driver "docker" against <nil>
	I1126 20:23:40.576937  299373 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:23:40.577559  299373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:40.637749  299373 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-26 20:23:40.628249184 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:40.637960  299373 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:23:40.638223  299373 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:40.639948  299373 out.go:179] * Using Docker driver with root privileges
	I1126 20:23:40.640994  299373 cni.go:84] Creating CNI manager for "kindnet"
	I1126 20:23:40.641013  299373 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:23:40.641094  299373 start.go:353] cluster config:
	{Name:kindnet-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:40.642246  299373 out.go:179] * Starting "kindnet-825702" primary control-plane node in "kindnet-825702" cluster
	I1126 20:23:40.644224  299373 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:23:40.645334  299373 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:23:40.646348  299373 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:40.646386  299373 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:23:40.646405  299373 cache.go:65] Caching tarball of preloaded images
	I1126 20:23:40.646437  299373 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:23:40.646530  299373 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:23:40.646543  299373 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:23:40.646662  299373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/config.json ...
	I1126 20:23:40.646685  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/config.json: {Name:mk86932abc2e48a231ae6431f4fd1bf125690490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:40.670569  299373 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:23:40.670595  299373 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:23:40.670614  299373 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:23:40.670646  299373 start.go:360] acquireMachinesLock for kindnet-825702: {Name:mkc15470e56d57d2409f69f1b26f61a041f37093 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:23:40.670750  299373 start.go:364] duration metric: took 84.628µs to acquireMachinesLock for "kindnet-825702"
	I1126 20:23:40.670778  299373 start.go:93] Provisioning new machine with config: &{Name:kindnet-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:23:40.670886  299373 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 26 20:23:08 embed-certs-949294 crio[564]: time="2025-11-26T20:23:08.140292229Z" level=info msg="Started container" PID=1741 containerID=020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper id=1661ef3e-e7d7-428b-98ce-ff29e1fdb991 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32f7ccb83cb61788747d40160ba4dbe5377423417cb8388ccb0197c805fdd8ba
	Nov 26 20:23:08 embed-certs-949294 crio[564]: time="2025-11-26T20:23:08.189201194Z" level=info msg="Removing container: d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc" id=598784d1-840e-4118-9808-00e9f88a2a82 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:23:08 embed-certs-949294 crio[564]: time="2025-11-26T20:23:08.220061688Z" level=info msg="Removed container d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper" id=598784d1-840e-4118-9808-00e9f88a2a82 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.224006258Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a88eede4-24fd-48bb-b693-73f1db0d33f5 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.226259529Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=01edc010-9cd5-43e5-b4df-991b1ccb9aa4 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.22818816Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5f095170-c08a-4dd6-bfdd-30781522c0f5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.228309158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.235136324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.235332023Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8b152c78016a55cdfb17358d9166992b5a8c4555d0390b0dc22c5b95740230c2/merged/etc/passwd: no such file or directory"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.235469733Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8b152c78016a55cdfb17358d9166992b5a8c4555d0390b0dc22c5b95740230c2/merged/etc/group: no such file or directory"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.235867315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.261876141Z" level=info msg="Created container ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f: kube-system/storage-provisioner/storage-provisioner" id=5f095170-c08a-4dd6-bfdd-30781522c0f5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.262435215Z" level=info msg="Starting container: ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f" id=6dbbe136-74c2-4d9d-acc8-b3ce74bfdc3e name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:23:18 embed-certs-949294 crio[564]: time="2025-11-26T20:23:18.264508204Z" level=info msg="Started container" PID=1755 containerID=ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f description=kube-system/storage-provisioner/storage-provisioner id=6dbbe136-74c2-4d9d-acc8-b3ce74bfdc3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab3ec70812556a26a2cd10250abf4e2273aa7627a85e7f389f353386a314d3eb
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.08888312Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=10f19a12-718b-4079-9b41-fa2c3c5c26fa name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.090201465Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dc1a5c3b-5f75-4b04-a302-743d938c8946 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.091318552Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper" id=4174d35a-03a6-423e-8ea3-5b48858f8ca1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.091446643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.100087787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.100825465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.143627036Z" level=info msg="Created container b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper" id=4174d35a-03a6-423e-8ea3-5b48858f8ca1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.144258632Z" level=info msg="Starting container: b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2" id=38a7ba88-cb4b-4a03-a664-14bfebf30573 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.146687409Z" level=info msg="Started container" PID=1789 containerID=b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper id=38a7ba88-cb4b-4a03-a664-14bfebf30573 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32f7ccb83cb61788747d40160ba4dbe5377423417cb8388ccb0197c805fdd8ba
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.267541083Z" level=info msg="Removing container: 020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5" id=368bc918-c870-422e-a070-add8b2a59b21 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:23:30 embed-certs-949294 crio[564]: time="2025-11-26T20:23:30.277904129Z" level=info msg="Removed container 020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9/dashboard-metrics-scraper" id=368bc918-c870-422e-a070-add8b2a59b21 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b05705dc359bb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   32f7ccb83cb61       dashboard-metrics-scraper-6ffb444bf9-lz5p9   kubernetes-dashboard
	ae31bf0fea5ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   ab3ec70812556       storage-provisioner                          kube-system
	3a8d799bf870a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   afbd06de88e28       kubernetes-dashboard-855c9754f9-8dsr7        kubernetes-dashboard
	93176a1ab732c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   fb11f0e1300b7       busybox                                      default
	cfe8d29a25b15       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   e6bede0dbd68e       coredns-66bc5c9577-s8rrr                     kube-system
	c8864fe978873       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   ab3ec70812556       storage-provisioner                          kube-system
	be832c3cf0e3e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   fc0a7c56888bc       kube-proxy-qnjvr                             kube-system
	365c78f7ca471       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   546338848a0b9       kindnet-9546l                                kube-system
	d0edb51e9e5fc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   d438afad05edc       kube-apiserver-embed-certs-949294            kube-system
	1247796ab1281       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   75490021d8a88       kube-scheduler-embed-certs-949294            kube-system
	f265ea81a0961       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   1f2a6a52b674a       kube-controller-manager-embed-certs-949294   kube-system
	27f1868f6da52       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   cf4385ce2aa61       etcd-embed-certs-949294                      kube-system
	
	
	==> coredns [cfe8d29a25b15896fe250c2367ea5bf40c17ce3f9aa972b6a3000afe2cea2ba4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:35918 - 19879 "HINFO IN 4178472970200051923.8652175570046282908. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.456862561s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-949294
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-949294
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=embed-certs-949294
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_21_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-949294
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:23:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:23:17 +0000   Wed, 26 Nov 2025 20:21:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:23:17 +0000   Wed, 26 Nov 2025 20:21:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:23:17 +0000   Wed, 26 Nov 2025 20:21:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:23:17 +0000   Wed, 26 Nov 2025 20:22:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-949294
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                aa80874f-b877-4d80-93ab-b99d96f2b5aa
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-s8rrr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-949294                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-9546l                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-949294             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-949294    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-qnjvr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-949294             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lz5p9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8dsr7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-949294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-949294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-949294 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node embed-certs-949294 event: Registered Node embed-certs-949294 in Controller
	  Normal  NodeReady                96s                kubelet          Node embed-certs-949294 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-949294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-949294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-949294 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-949294 event: Registered Node embed-certs-949294 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [27f1868f6da521ee9bf27fc5eccba3b561b07052f526f759dfbebcc382b682e0] <==
	{"level":"warn","ts":"2025-11-26T20:22:45.931881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.938438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.945428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.954308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.961834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.970105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.977542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.986273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:45.995905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.015109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.026256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.048824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.056401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.065080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.071709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.079246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.091828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.099869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.108086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.123919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.130128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.136454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:22:46.182308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:07.323522Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.109827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-s8rrr\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"info","ts":"2025-11-26T20:23:07.323628Z","caller":"traceutil/trace.go:172","msg":"trace[1741218818] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-s8rrr; range_end:; response_count:1; response_revision:606; }","duration":"117.234691ms","start":"2025-11-26T20:23:07.206378Z","end":"2025-11-26T20:23:07.323613Z","steps":["trace[1741218818] 'range keys from in-memory index tree'  (duration: 116.958118ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:23:41 up  1:06,  0 user,  load average: 3.34, 3.17, 2.14
	Linux embed-certs-949294 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [365c78f7ca47179e6d1e36f011f96db8f3f25d8d01ed886707e3e02c4beb2040] <==
	I1126 20:22:47.690921       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:22:47.691171       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1126 20:22:47.691313       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:22:47.691326       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:22:47.691349       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:22:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:22:47.895696       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:22:47.895723       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:22:47.895737       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:22:47.895867       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:22:48.196991       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:22:48.197027       1 metrics.go:72] Registering metrics
	I1126 20:22:48.197113       1 controller.go:711] "Syncing nftables rules"
	I1126 20:22:57.895597       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:22:57.895686       1 main.go:301] handling current node
	I1126 20:23:07.898533       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:23:07.898567       1 main.go:301] handling current node
	I1126 20:23:17.895200       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:23:17.895227       1 main.go:301] handling current node
	I1126 20:23:27.898557       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:23:27.898593       1 main.go:301] handling current node
	I1126 20:23:37.895630       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1126 20:23:37.895656       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0edb51e9e5fccff0dc7134b628a507303eb3f0ea693960b2ef07c819ccfcfb3] <==
	I1126 20:22:46.865107       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:22:46.865128       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:22:46.865142       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:22:46.867391       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:22:46.874577       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1126 20:22:46.879297       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:22:46.895881       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:22:46.906037       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1126 20:22:46.906072       1 policy_source.go:240] refreshing policies
	I1126 20:22:46.906742       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:22:46.945271       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:22:46.945368       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:22:46.945387       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:22:46.952973       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:22:47.189013       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:22:47.243758       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:22:47.377424       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:22:47.407615       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:22:47.424757       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:22:47.523942       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.67.8"}
	I1126 20:22:47.539945       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.175.98"}
	I1126 20:22:47.751441       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:22:50.474497       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:22:50.672414       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:22:50.770861       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f265ea81a09610c82177779173f228ed110d405daaad945e9224abda5afc655e] <==
	I1126 20:22:50.221408       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:22:50.221438       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:22:50.221505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:22:50.221809       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 20:22:50.223742       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:22:50.223796       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:22:50.223814       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:22:50.224946       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:22:50.227542       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:50.230600       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:22:50.254293       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:22:50.259955       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:22:50.264420       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:50.268739       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1126 20:22:50.269964       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:22:50.269982       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:22:50.269991       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:22:50.270779       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:22:50.274814       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:22:50.278514       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:22:50.283514       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:22:50.283662       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:22:50.283794       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-949294"
	I1126 20:22:50.283882       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 20:22:50.319028       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [be832c3cf0e3ebddcc1aa2777d57e2d38d2836d83f9be1d7f3aeba656fd95381] <==
	I1126 20:22:47.540724       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:22:47.613580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:22:47.714551       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:22:47.714591       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1126 20:22:47.714687       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:22:47.734522       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:22:47.734597       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:22:47.740993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:22:47.741490       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:22:47.741525       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:47.745258       1 config.go:309] "Starting node config controller"
	I1126 20:22:47.745324       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:22:47.745431       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:22:47.745504       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:22:47.745780       1 config.go:200] "Starting service config controller"
	I1126 20:22:47.745803       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:22:47.745841       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:22:47.745855       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:22:47.845974       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:22:47.845970       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:22:47.846018       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:22:47.846029       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1247796ab1281fb5ee5525e146ff9c03a80b5e36662073b88f7b4335b21630fa] <==
	I1126 20:22:45.338294       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:22:46.809208       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:22:46.809260       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:22:46.809272       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:22:46.809281       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:22:46.865537       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:22:46.865570       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:22:46.870823       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:46.874677       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:22:46.871605       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:22:46.871645       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:22:46.975041       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:22:55 embed-certs-949294 kubelet[730]: I1126 20:22:55.154257     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:22:55 embed-certs-949294 kubelet[730]: E1126 20:22:55.155126     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:22:56 embed-certs-949294 kubelet[730]: I1126 20:22:56.157322     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:22:56 embed-certs-949294 kubelet[730]: E1126 20:22:56.157545     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:22:57 embed-certs-949294 kubelet[730]: I1126 20:22:57.159438     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:22:57 embed-certs-949294 kubelet[730]: E1126 20:22:57.159695     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:22:59 embed-certs-949294 kubelet[730]: I1126 20:22:59.178307     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8dsr7" podStartSLOduration=2.193407743 podStartE2EDuration="9.178288191s" podCreationTimestamp="2025-11-26 20:22:50 +0000 UTC" firstStartedPulling="2025-11-26 20:22:51.207850348 +0000 UTC m=+7.206058738" lastFinishedPulling="2025-11-26 20:22:58.192730787 +0000 UTC m=+14.190939186" observedRunningTime="2025-11-26 20:22:59.178110237 +0000 UTC m=+15.176318669" watchObservedRunningTime="2025-11-26 20:22:59.178288191 +0000 UTC m=+15.176496600"
	Nov 26 20:23:08 embed-certs-949294 kubelet[730]: I1126 20:23:08.088253     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:23:08 embed-certs-949294 kubelet[730]: I1126 20:23:08.187814     730 scope.go:117] "RemoveContainer" containerID="d1439c7b5d56bfbe046ea963b327db2e10f16f89500515bb1f9347b76aa256bc"
	Nov 26 20:23:08 embed-certs-949294 kubelet[730]: I1126 20:23:08.188005     730 scope.go:117] "RemoveContainer" containerID="020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5"
	Nov 26 20:23:08 embed-certs-949294 kubelet[730]: E1126 20:23:08.188224     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:23:16 embed-certs-949294 kubelet[730]: I1126 20:23:16.055938     730 scope.go:117] "RemoveContainer" containerID="020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5"
	Nov 26 20:23:16 embed-certs-949294 kubelet[730]: E1126 20:23:16.056141     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:23:18 embed-certs-949294 kubelet[730]: I1126 20:23:18.223232     730 scope.go:117] "RemoveContainer" containerID="c8864fe9788731d2ce2001c0936defdb11739fcb35cd62e407e708947119748c"
	Nov 26 20:23:30 embed-certs-949294 kubelet[730]: I1126 20:23:30.088364     730 scope.go:117] "RemoveContainer" containerID="020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5"
	Nov 26 20:23:30 embed-certs-949294 kubelet[730]: I1126 20:23:30.266206     730 scope.go:117] "RemoveContainer" containerID="020ae747188435739a619b6aad1c7ab4f360ef35d0eeb58db07dbc9a8d1f40c5"
	Nov 26 20:23:30 embed-certs-949294 kubelet[730]: I1126 20:23:30.266386     730 scope.go:117] "RemoveContainer" containerID="b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2"
	Nov 26 20:23:30 embed-certs-949294 kubelet[730]: E1126 20:23:30.266624     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:23:36 embed-certs-949294 kubelet[730]: I1126 20:23:36.055535     730 scope.go:117] "RemoveContainer" containerID="b05705dc359bb9d69e6e46ca5d20f8b41a1d58b1b5bdb3bb6aad951eb6e902a2"
	Nov 26 20:23:36 embed-certs-949294 kubelet[730]: E1126 20:23:36.055777     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lz5p9_kubernetes-dashboard(527424d4-133e-4351-b73f-c777f0d44483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lz5p9" podUID="527424d4-133e-4351-b73f-c777f0d44483"
	Nov 26 20:23:36 embed-certs-949294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:23:36 embed-certs-949294 kubelet[730]: I1126 20:23:36.803886     730 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 26 20:23:36 embed-certs-949294 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:23:36 embed-certs-949294 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:23:36 embed-certs-949294 systemd[1]: kubelet.service: Consumed 1.689s CPU time.
	
	
	==> kubernetes-dashboard [3a8d799bf870a2ddb08ba11d6e10375455a0f1bff23f82425087bf768c717932] <==
	2025/11/26 20:22:58 Using namespace: kubernetes-dashboard
	2025/11/26 20:22:58 Using in-cluster config to connect to apiserver
	2025/11/26 20:22:58 Using secret token for csrf signing
	2025/11/26 20:22:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:22:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:22:58 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:22:58 Generating JWE encryption key
	2025/11/26 20:22:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:22:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:22:58 Initializing JWE encryption key from synchronized object
	2025/11/26 20:22:58 Creating in-cluster Sidecar client
	2025/11/26 20:22:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:22:58 Serving insecurely on HTTP port: 9090
	2025/11/26 20:23:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:22:58 Starting overwatch
	
	
	==> storage-provisioner [ae31bf0fea5ac585f473d8f059b10f05dfe486cea4008bbd9381863883dfa02f] <==
	I1126 20:23:18.278538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:23:18.287248       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:23:18.287302       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:23:18.289200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:21.744382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:26.005422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:29.604913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:32.659106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:35.681820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:35.686076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:23:35.686248       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:23:35.686417       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-949294_07f263f2-0bfd-4721-a0c3-f0f3e2b5a8ca!
	I1126 20:23:35.686425       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"05f20356-e266-4bee-9af8-d671ea0ca424", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-949294_07f263f2-0bfd-4721-a0c3-f0f3e2b5a8ca became leader
	W1126 20:23:35.688080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:35.692089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:23:35.786646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-949294_07f263f2-0bfd-4721-a0c3-f0f3e2b5a8ca!
	W1126 20:23:37.695245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:37.700410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:39.703540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:39.708039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:41.711661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:41.716176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c8864fe9788731d2ce2001c0936defdb11739fcb35cd62e407e708947119748c] <==
	I1126 20:22:47.516378       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:23:17.521603       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-949294 -n embed-certs-949294
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-949294 -n embed-certs-949294: exit status 2 (341.420207ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-949294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-178152 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-178152 --alsologtostderr -v=1: exit status 80 (2.540208035s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-178152 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 20:24:14.479807  310059 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:24:14.480297  310059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:24:14.480308  310059 out.go:374] Setting ErrFile to fd 2...
	I1126 20:24:14.480315  310059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:24:14.480633  310059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:24:14.480936  310059 out.go:368] Setting JSON to false
	I1126 20:24:14.480955  310059 mustload.go:66] Loading cluster: default-k8s-diff-port-178152
	I1126 20:24:14.481536  310059 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:24:14.482146  310059 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-178152 --format={{.State.Status}}
	I1126 20:24:14.508913  310059 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:24:14.509342  310059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:24:14.583185  310059 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-26 20:24:14.570654208 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:24:14.584004  310059 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-178152 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:24:14.586034  310059 out.go:179] * Pausing node default-k8s-diff-port-178152 ... 
	I1126 20:24:14.587128  310059 host.go:66] Checking if "default-k8s-diff-port-178152" exists ...
	I1126 20:24:14.587447  310059 ssh_runner.go:195] Run: systemctl --version
	I1126 20:24:14.587522  310059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-178152
	I1126 20:24:14.609425  310059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/default-k8s-diff-port-178152/id_rsa Username:docker}
	I1126 20:24:14.715876  310059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:24:14.732016  310059 pause.go:52] kubelet running: true
	I1126 20:24:14.732072  310059 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:24:14.962593  310059 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:24:14.962705  310059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:24:15.047331  310059 cri.go:89] found id: "4a7a55586fdacfafe83753690495f1e47b964f7c04ba8d593fac2ce9f15e9dd8"
	I1126 20:24:15.047399  310059 cri.go:89] found id: "b5b54b11bd45b447bfaecefe94487f516b269bf58598eb7dcfa18af5edc1612e"
	I1126 20:24:15.047405  310059 cri.go:89] found id: "5f9228ddca10228c8bcb4f8b1d2167718f80e4d928f6cbae8a8ee24504ece49e"
	I1126 20:24:15.047411  310059 cri.go:89] found id: "ca65c52a7e15d14f87ed86b2027a3434dde86bb3c0239a70ce563e361f2bf410"
	I1126 20:24:15.047415  310059 cri.go:89] found id: "e8ac49bc740f47f5f32f520593ad8e80e3fef725e3ab711117dd927d45c20eb3"
	I1126 20:24:15.047420  310059 cri.go:89] found id: "851ab28993a8b771e5c0a2f66a99e6f22fbce78ae80259d843eb935540e459cb"
	I1126 20:24:15.047425  310059 cri.go:89] found id: "cd9d1e44673562de24369667807437c6a3c635356a7d55a049742bc674b8d8bb"
	I1126 20:24:15.047429  310059 cri.go:89] found id: "45aa87e14b73bdfe289340af54adbd31ea4129fb3bbd2a635ed0f95587efea73"
	I1126 20:24:15.047434  310059 cri.go:89] found id: "53d64e031f1a8bdf5d5b01443fbdf38f958b0ba7ea8c6514663f714eb5cedebf"
	I1126 20:24:15.047447  310059 cri.go:89] found id: "32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21"
	I1126 20:24:15.047475  310059 cri.go:89] found id: "a2ca658bde7be15ad908300df3903014b37b72b558d8856ce799024e45c11ee0"
	I1126 20:24:15.047481  310059 cri.go:89] found id: ""
	I1126 20:24:15.047526  310059 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:24:15.062490  310059 retry.go:31] will retry after 195.148551ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:24:15Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:24:15.257768  310059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:24:15.274894  310059 pause.go:52] kubelet running: false
	I1126 20:24:15.274948  310059 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:24:15.473967  310059 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:24:15.474048  310059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:24:15.553188  310059 cri.go:89] found id: "4a7a55586fdacfafe83753690495f1e47b964f7c04ba8d593fac2ce9f15e9dd8"
	I1126 20:24:15.553215  310059 cri.go:89] found id: "b5b54b11bd45b447bfaecefe94487f516b269bf58598eb7dcfa18af5edc1612e"
	I1126 20:24:15.553219  310059 cri.go:89] found id: "5f9228ddca10228c8bcb4f8b1d2167718f80e4d928f6cbae8a8ee24504ece49e"
	I1126 20:24:15.553223  310059 cri.go:89] found id: "ca65c52a7e15d14f87ed86b2027a3434dde86bb3c0239a70ce563e361f2bf410"
	I1126 20:24:15.553225  310059 cri.go:89] found id: "e8ac49bc740f47f5f32f520593ad8e80e3fef725e3ab711117dd927d45c20eb3"
	I1126 20:24:15.553229  310059 cri.go:89] found id: "851ab28993a8b771e5c0a2f66a99e6f22fbce78ae80259d843eb935540e459cb"
	I1126 20:24:15.553231  310059 cri.go:89] found id: "cd9d1e44673562de24369667807437c6a3c635356a7d55a049742bc674b8d8bb"
	I1126 20:24:15.553234  310059 cri.go:89] found id: "45aa87e14b73bdfe289340af54adbd31ea4129fb3bbd2a635ed0f95587efea73"
	I1126 20:24:15.553237  310059 cri.go:89] found id: "53d64e031f1a8bdf5d5b01443fbdf38f958b0ba7ea8c6514663f714eb5cedebf"
	I1126 20:24:15.553251  310059 cri.go:89] found id: "32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21"
	I1126 20:24:15.553255  310059 cri.go:89] found id: "a2ca658bde7be15ad908300df3903014b37b72b558d8856ce799024e45c11ee0"
	I1126 20:24:15.553260  310059 cri.go:89] found id: ""
	I1126 20:24:15.553302  310059 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:24:15.573205  310059 retry.go:31] will retry after 312.640207ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:24:15Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:24:15.886780  310059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:24:15.900604  310059 pause.go:52] kubelet running: false
	I1126 20:24:15.900649  310059 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:24:16.046515  310059 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:24:16.046594  310059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:24:16.115956  310059 cri.go:89] found id: "4a7a55586fdacfafe83753690495f1e47b964f7c04ba8d593fac2ce9f15e9dd8"
	I1126 20:24:16.115977  310059 cri.go:89] found id: "b5b54b11bd45b447bfaecefe94487f516b269bf58598eb7dcfa18af5edc1612e"
	I1126 20:24:16.115982  310059 cri.go:89] found id: "5f9228ddca10228c8bcb4f8b1d2167718f80e4d928f6cbae8a8ee24504ece49e"
	I1126 20:24:16.115987  310059 cri.go:89] found id: "ca65c52a7e15d14f87ed86b2027a3434dde86bb3c0239a70ce563e361f2bf410"
	I1126 20:24:16.115991  310059 cri.go:89] found id: "e8ac49bc740f47f5f32f520593ad8e80e3fef725e3ab711117dd927d45c20eb3"
	I1126 20:24:16.115996  310059 cri.go:89] found id: "851ab28993a8b771e5c0a2f66a99e6f22fbce78ae80259d843eb935540e459cb"
	I1126 20:24:16.116000  310059 cri.go:89] found id: "cd9d1e44673562de24369667807437c6a3c635356a7d55a049742bc674b8d8bb"
	I1126 20:24:16.116004  310059 cri.go:89] found id: "45aa87e14b73bdfe289340af54adbd31ea4129fb3bbd2a635ed0f95587efea73"
	I1126 20:24:16.116009  310059 cri.go:89] found id: "53d64e031f1a8bdf5d5b01443fbdf38f958b0ba7ea8c6514663f714eb5cedebf"
	I1126 20:24:16.116016  310059 cri.go:89] found id: "32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21"
	I1126 20:24:16.116020  310059 cri.go:89] found id: "a2ca658bde7be15ad908300df3903014b37b72b558d8856ce799024e45c11ee0"
	I1126 20:24:16.116024  310059 cri.go:89] found id: ""
	I1126 20:24:16.116063  310059 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:24:16.128618  310059 retry.go:31] will retry after 416.138999ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:24:16Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:24:16.545200  310059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:24:16.558836  310059 pause.go:52] kubelet running: false
	I1126 20:24:16.558904  310059 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:24:16.719171  310059 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:24:16.719245  310059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:24:16.816519  310059 cri.go:89] found id: "4a7a55586fdacfafe83753690495f1e47b964f7c04ba8d593fac2ce9f15e9dd8"
	I1126 20:24:16.816545  310059 cri.go:89] found id: "b5b54b11bd45b447bfaecefe94487f516b269bf58598eb7dcfa18af5edc1612e"
	I1126 20:24:16.816551  310059 cri.go:89] found id: "5f9228ddca10228c8bcb4f8b1d2167718f80e4d928f6cbae8a8ee24504ece49e"
	I1126 20:24:16.816556  310059 cri.go:89] found id: "ca65c52a7e15d14f87ed86b2027a3434dde86bb3c0239a70ce563e361f2bf410"
	I1126 20:24:16.816560  310059 cri.go:89] found id: "e8ac49bc740f47f5f32f520593ad8e80e3fef725e3ab711117dd927d45c20eb3"
	I1126 20:24:16.816564  310059 cri.go:89] found id: "851ab28993a8b771e5c0a2f66a99e6f22fbce78ae80259d843eb935540e459cb"
	I1126 20:24:16.816569  310059 cri.go:89] found id: "cd9d1e44673562de24369667807437c6a3c635356a7d55a049742bc674b8d8bb"
	I1126 20:24:16.816575  310059 cri.go:89] found id: "45aa87e14b73bdfe289340af54adbd31ea4129fb3bbd2a635ed0f95587efea73"
	I1126 20:24:16.816580  310059 cri.go:89] found id: "53d64e031f1a8bdf5d5b01443fbdf38f958b0ba7ea8c6514663f714eb5cedebf"
	I1126 20:24:16.816588  310059 cri.go:89] found id: "32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21"
	I1126 20:24:16.816592  310059 cri.go:89] found id: "a2ca658bde7be15ad908300df3903014b37b72b558d8856ce799024e45c11ee0"
	I1126 20:24:16.816605  310059 cri.go:89] found id: ""
	I1126 20:24:16.816650  310059 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:24:16.924113  310059 out.go:203] 
	W1126 20:24:16.935129  310059 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:24:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:24:16.935150  310059 out.go:285] * 
	W1126 20:24:16.939067  310059 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:24:16.940360  310059 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-178152 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-178152
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-178152:

-- stdout --
	[
	    {
	        "Id": "1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370",
	        "Created": "2025-11-26T20:22:08.62900996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292226,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:23:11.657604117Z",
	            "FinishedAt": "2025-11-26T20:23:10.784334711Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/hostname",
	        "HostsPath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/hosts",
	        "LogPath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370-json.log",
	        "Name": "/default-k8s-diff-port-178152",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-178152:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-178152",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370",
	                "LowerDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-178152",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-178152/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-178152",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-178152",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-178152",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "51d6a2d6b828a4389d7418d31f8938c1b70c1c74e08990debe19a35152ddca9c",
	            "SandboxKey": "/var/run/docker/netns/51d6a2d6b828",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-178152": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec68256d41186ab4784970795756969f4ed3452c84879229e3a4f0a4adc0c9b1",
	                    "EndpointID": "aca567f2caea0d0b708ae3beb594c97b88d938c79b64bfd69c78d96828f66894",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "c2:b7:21:9c:18:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-178152",
	                        "1da700037b3c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152: exit status 2 (347.234901ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-178152 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-178152 logs -n 25: (1.106411841s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-825702 sudo systemctl cat kubelet --no-pager                         │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo journalctl -xeu kubelet --all --full --no-pager          │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /etc/kubernetes/kubelet.conf                         │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /var/lib/kubelet/config.yaml                         │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo systemctl status docker --all --full --no-pager          │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo systemctl cat docker --no-pager                          │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /etc/docker/daemon.json                              │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo docker system info                                       │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo systemctl status cri-docker --all --full --no-pager      │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo systemctl cat cri-docker --no-pager                      │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo cat /usr/lib/systemd/system/cri-docker.service           │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cri-dockerd --version                                    │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo systemctl status containerd --all --full --no-pager      │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo systemctl cat containerd --no-pager                      │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ image   │ default-k8s-diff-port-178152 image list --format=json                        │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /lib/systemd/system/containerd.service               │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ pause   │ -p default-k8s-diff-port-178152 --alsologtostderr -v=1                       │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo cat /etc/containerd/config.toml                          │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo containerd config dump                                   │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo systemctl status crio --all --full --no-pager            │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo systemctl cat crio --no-pager                            │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo crio config                                              │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ delete  │ -p auto-825702                                                               │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:23:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:23:47.189848  301922 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:23:47.189957  301922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:47.189965  301922 out.go:374] Setting ErrFile to fd 2...
	I1126 20:23:47.189971  301922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:47.190239  301922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:23:47.190723  301922 out.go:368] Setting JSON to false
	I1126 20:23:47.192077  301922 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3977,"bootTime":1764184650,"procs":444,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:23:47.192134  301922 start.go:143] virtualization: kvm guest
	I1126 20:23:47.194561  301922 out.go:179] * [calico-825702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:23:47.195724  301922 notify.go:221] Checking for updates...
	I1126 20:23:47.195772  301922 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:23:47.197117  301922 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:23:47.198665  301922 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:47.199967  301922 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:23:47.201285  301922 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:23:47.202564  301922 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:23:47.204419  301922 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:47.204573  301922 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:47.204689  301922 config.go:182] Loaded profile config "kindnet-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:47.204808  301922 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:23:47.229902  301922 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:23:47.229971  301922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:47.287928  301922 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:23:47.277739401 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:47.288076  301922 docker.go:319] overlay module found
	I1126 20:23:47.289832  301922 out.go:179] * Using the docker driver based on user configuration
	I1126 20:23:47.291025  301922 start.go:309] selected driver: docker
	I1126 20:23:47.291039  301922 start.go:927] validating driver "docker" against <nil>
	I1126 20:23:47.291052  301922 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:23:47.291780  301922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:47.352797  301922 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:23:47.342766053 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:47.352951  301922 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:23:47.353144  301922 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:47.354992  301922 out.go:179] * Using Docker driver with root privileges
	I1126 20:23:47.356169  301922 cni.go:84] Creating CNI manager for "calico"
	I1126 20:23:47.356187  301922 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1126 20:23:47.356249  301922 start.go:353] cluster config:
	{Name:calico-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:47.357525  301922 out.go:179] * Starting "calico-825702" primary control-plane node in "calico-825702" cluster
	I1126 20:23:47.358573  301922 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:23:47.359707  301922 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:23:47.360761  301922 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:47.360792  301922 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:23:47.360800  301922 cache.go:65] Caching tarball of preloaded images
	I1126 20:23:47.360880  301922 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:23:47.360893  301922 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:23:47.360888  301922 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:23:47.360993  301922 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/config.json ...
	I1126 20:23:47.361017  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/config.json: {Name:mkb261286e1f4d4d01af83fdf3add0b686de212e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:47.381692  301922 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:23:47.381711  301922 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:23:47.381724  301922 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:23:47.381746  301922 start.go:360] acquireMachinesLock for calico-825702: {Name:mk2d555972f6c5e77ea8f2b60bfc246817d537f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:23:47.381838  301922 start.go:364] duration metric: took 78.405µs to acquireMachinesLock for "calico-825702"
	I1126 20:23:47.381860  301922 start.go:93] Provisioning new machine with config: &{Name:calico-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:23:47.381919  301922 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:23:44.508679  290654 node_ready.go:49] node "auto-825702" is "Ready"
	I1126 20:23:44.508755  290654 node_ready.go:38] duration metric: took 11.028306238s for node "auto-825702" to be "Ready" ...
	I1126 20:23:44.508854  290654 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:23:44.508981  290654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:23:44.522641  290654 api_server.go:72] duration metric: took 11.332337138s to wait for apiserver process to appear ...
	I1126 20:23:44.522667  290654 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:23:44.522687  290654 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:23:44.526748  290654 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1126 20:23:44.527674  290654 api_server.go:141] control plane version: v1.34.1
	I1126 20:23:44.527699  290654 api_server.go:131] duration metric: took 5.024515ms to wait for apiserver health ...
	I1126 20:23:44.527711  290654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:23:44.697892  290654 system_pods.go:59] 8 kube-system pods found
	I1126 20:23:44.697928  290654 system_pods.go:61] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending
	I1126 20:23:44.697935  290654 system_pods.go:61] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:44.697940  290654 system_pods.go:61] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:44.697945  290654 system_pods.go:61] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:44.697951  290654 system_pods.go:61] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:44.697955  290654 system_pods.go:61] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:44.697960  290654 system_pods.go:61] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:44.697964  290654 system_pods.go:61] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending
	I1126 20:23:44.697972  290654 system_pods.go:74] duration metric: took 170.250183ms to wait for pod list to return data ...
	I1126 20:23:44.697980  290654 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:23:44.830743  290654 default_sa.go:45] found service account: "default"
	I1126 20:23:44.830774  290654 default_sa.go:55] duration metric: took 132.786598ms for default service account to be created ...
	I1126 20:23:44.830787  290654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:23:44.833946  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:44.833980  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:44.833986  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:44.833993  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:44.833998  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:44.834003  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:44.834008  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:44.834013  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:44.834018  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending
	I1126 20:23:44.834040  290654 retry.go:31] will retry after 225.89413ms: missing components: kube-dns
	I1126 20:23:45.063418  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:45.063449  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:45.063484  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:45.063492  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:45.063498  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:45.063509  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:45.063518  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:45.063524  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:45.063540  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:45.063559  290654 retry.go:31] will retry after 262.27459ms: missing components: kube-dns
	I1126 20:23:45.387844  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:45.387909  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:45.387923  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:45.387931  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:45.387937  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:45.387942  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:45.387951  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:45.387956  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:45.387964  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:45.387980  290654 retry.go:31] will retry after 405.718495ms: missing components: kube-dns
	I1126 20:23:45.797836  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:45.797879  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:45.797889  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:45.797895  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:45.797901  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:45.797907  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:45.797915  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:45.797922  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:45.797929  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:45.797946  290654 retry.go:31] will retry after 514.179543ms: missing components: kube-dns
	I1126 20:23:46.316516  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:46.316551  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:46.316559  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:46.316570  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:46.316579  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:46.316588  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:46.316594  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:46.316600  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:46.316610  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:46.316630  290654 retry.go:31] will retry after 505.999214ms: missing components: kube-dns
	I1126 20:23:46.827423  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:46.827452  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Running
	I1126 20:23:46.827486  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:46.827498  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:46.827504  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:46.827510  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:46.827516  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:46.827523  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:46.827529  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Running
	I1126 20:23:46.827539  290654 system_pods.go:126] duration metric: took 1.996744001s to wait for k8s-apps to be running ...
	I1126 20:23:46.827548  290654 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:23:46.827596  290654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:46.842254  290654 system_svc.go:56] duration metric: took 14.697934ms WaitForService to wait for kubelet
	I1126 20:23:46.842281  290654 kubeadm.go:587] duration metric: took 13.651978541s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:46.842299  290654 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:23:46.845215  290654 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:23:46.845239  290654 node_conditions.go:123] node cpu capacity is 8
	I1126 20:23:46.845254  290654 node_conditions.go:105] duration metric: took 2.949322ms to run NodePressure ...
	I1126 20:23:46.845264  290654 start.go:242] waiting for startup goroutines ...
	I1126 20:23:46.845271  290654 start.go:247] waiting for cluster config update ...
	I1126 20:23:46.845279  290654 start.go:256] writing updated cluster config ...
	I1126 20:23:46.845537  290654 ssh_runner.go:195] Run: rm -f paused
	I1126 20:23:46.849594  290654 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:46.853796  290654 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8lbn9" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.858602  290654 pod_ready.go:94] pod "coredns-66bc5c9577-8lbn9" is "Ready"
	I1126 20:23:46.858622  290654 pod_ready.go:86] duration metric: took 4.805106ms for pod "coredns-66bc5c9577-8lbn9" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.860617  290654 pod_ready.go:83] waiting for pod "etcd-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.864506  290654 pod_ready.go:94] pod "etcd-auto-825702" is "Ready"
	I1126 20:23:46.864526  290654 pod_ready.go:86] duration metric: took 3.888732ms for pod "etcd-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.866346  290654 pod_ready.go:83] waiting for pod "kube-apiserver-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.869886  290654 pod_ready.go:94] pod "kube-apiserver-auto-825702" is "Ready"
	I1126 20:23:46.869905  290654 pod_ready.go:86] duration metric: took 3.542192ms for pod "kube-apiserver-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.871974  290654 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:47.254559  290654 pod_ready.go:94] pod "kube-controller-manager-auto-825702" is "Ready"
	I1126 20:23:47.254582  290654 pod_ready.go:86] duration metric: took 382.5885ms for pod "kube-controller-manager-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:47.454812  290654 pod_ready.go:83] waiting for pod "kube-proxy-zj978" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:47.854284  290654 pod_ready.go:94] pod "kube-proxy-zj978" is "Ready"
	I1126 20:23:47.854314  290654 pod_ready.go:86] duration metric: took 399.473345ms for pod "kube-proxy-zj978" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:48.054761  290654 pod_ready.go:83] waiting for pod "kube-scheduler-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:48.454164  290654 pod_ready.go:94] pod "kube-scheduler-auto-825702" is "Ready"
	I1126 20:23:48.454189  290654 pod_ready.go:86] duration metric: took 399.401067ms for pod "kube-scheduler-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:48.454200  290654 pod_ready.go:40] duration metric: took 1.604575261s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:48.503227  290654 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:23:48.505850  290654 out.go:179] * Done! kubectl is now configured to use "auto-825702" cluster and "default" namespace by default
	I1126 20:23:45.793335  299373 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-825702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.54296022s)
	I1126 20:23:45.793367  299373 kic.go:203] duration metric: took 4.543107725s to extract preloaded images to volume ...
	W1126 20:23:45.793472  299373 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:23:45.793515  299373 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:23:45.793561  299373 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:23:45.856182  299373 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-825702 --name kindnet-825702 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-825702 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-825702 --network kindnet-825702 --ip 192.168.76.2 --volume kindnet-825702:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:23:46.203867  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Running}}
	I1126 20:23:46.223678  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:23:46.243533  299373 cli_runner.go:164] Run: docker exec kindnet-825702 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:23:46.315187  299373 oci.go:144] the created container "kindnet-825702" has a running status.
	I1126 20:23:46.315216  299373 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa...
	I1126 20:23:46.351483  299373 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:23:46.729438  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:23:46.749322  299373 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:23:46.749342  299373 kic_runner.go:114] Args: [docker exec --privileged kindnet-825702 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:23:46.798515  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:23:46.817923  299373 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:46.818039  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:46.838583  299373 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:46.838912  299373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1126 20:23:46.838934  299373 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:46.986273  299373 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-825702
	
	I1126 20:23:46.986304  299373 ubuntu.go:182] provisioning hostname "kindnet-825702"
	I1126 20:23:46.986353  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:47.006139  299373 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:47.006352  299373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1126 20:23:47.006367  299373 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-825702 && echo "kindnet-825702" | sudo tee /etc/hostname
	I1126 20:23:47.158629  299373 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-825702
	
	I1126 20:23:47.158691  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:47.177826  299373 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:47.178121  299373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1126 20:23:47.178143  299373 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-825702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-825702/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-825702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:47.322594  299373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:47.322622  299373 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:47.322667  299373 ubuntu.go:190] setting up certificates
	I1126 20:23:47.322680  299373 provision.go:84] configureAuth start
	I1126 20:23:47.322728  299373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-825702
	I1126 20:23:47.342348  299373 provision.go:143] copyHostCerts
	I1126 20:23:47.342417  299373 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:47.342432  299373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:47.342542  299373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:47.342656  299373 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:47.342670  299373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:47.342710  299373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:47.342774  299373 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:47.342790  299373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:47.342827  299373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:47.342900  299373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.kindnet-825702 san=[127.0.0.1 192.168.76.2 kindnet-825702 localhost minikube]
	I1126 20:23:47.508300  299373 provision.go:177] copyRemoteCerts
	I1126 20:23:47.508346  299373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:47.508380  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:47.527269  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:47.628450  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1126 20:23:47.652806  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:23:47.675009  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:47.693395  299373 provision.go:87] duration metric: took 370.702667ms to configureAuth
	I1126 20:23:47.693421  299373 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:47.693603  299373 config.go:182] Loaded profile config "kindnet-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:47.693712  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:47.715012  299373 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:47.715304  299373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1126 20:23:47.715327  299373 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:48.029803  299373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:48.029824  299373 machine.go:97] duration metric: took 1.211848447s to provisionDockerMachine
	I1126 20:23:48.029838  299373 client.go:176] duration metric: took 7.356770582s to LocalClient.Create
	I1126 20:23:48.029849  299373 start.go:167] duration metric: took 7.35683624s to libmachine.API.Create "kindnet-825702"
	I1126 20:23:48.029855  299373 start.go:293] postStartSetup for "kindnet-825702" (driver="docker")
	I1126 20:23:48.029864  299373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:48.029912  299373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:48.029949  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:48.052127  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:48.157496  299373 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:48.161331  299373 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:48.161355  299373 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:48.161367  299373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:48.161438  299373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:48.161544  299373 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:48.161673  299373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:48.168981  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:48.188573  299373 start.go:296] duration metric: took 158.707835ms for postStartSetup
	I1126 20:23:48.188945  299373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-825702
	I1126 20:23:48.210197  299373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/config.json ...
	I1126 20:23:48.210417  299373 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:48.210479  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:48.229744  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:48.328760  299373 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:48.333488  299373 start.go:128] duration metric: took 7.662588038s to createHost
	I1126 20:23:48.333515  299373 start.go:83] releasing machines lock for "kindnet-825702", held for 7.662752378s
	I1126 20:23:48.333591  299373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-825702
	I1126 20:23:48.352907  299373 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:48.352958  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:48.352989  299373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:48.353065  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:48.371914  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:48.374515  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:48.548180  299373 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:48.556577  299373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:48.596317  299373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:48.602266  299373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:48.602363  299373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:48.632866  299373 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:23:48.632893  299373 start.go:496] detecting cgroup driver to use...
	I1126 20:23:48.632925  299373 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:48.632970  299373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:48.653586  299373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:48.669406  299373 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:48.669588  299373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:48.692169  299373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:48.715327  299373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:48.820563  299373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:48.920490  299373 docker.go:234] disabling docker service ...
	I1126 20:23:48.920556  299373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:48.943723  299373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:48.957969  299373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:49.082396  299373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:49.176356  299373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:49.190094  299373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:49.207004  299373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:49.207073  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.217706  299373 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:49.217763  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.226617  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.235292  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.243823  299373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:49.251931  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.260429  299373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.274135  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.283794  299373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:49.291047  299373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:49.298102  299373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:49.385437  299373 ssh_runner.go:195] Run: sudo systemctl restart crio
	W1126 20:23:48.200948  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:50.201099  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:51.787724  299373 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.401986212s)
	I1126 20:23:51.787765  299373 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:51.787822  299373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:51.794485  299373 start.go:564] Will wait 60s for crictl version
	I1126 20:23:51.794545  299373 ssh_runner.go:195] Run: which crictl
	I1126 20:23:51.800840  299373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:51.839310  299373 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:51.839418  299373 ssh_runner.go:195] Run: crio --version
	I1126 20:23:51.880720  299373 ssh_runner.go:195] Run: crio --version
	I1126 20:23:51.927354  299373 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:23:47.384542  301922 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:23:47.384764  301922 start.go:159] libmachine.API.Create for "calico-825702" (driver="docker")
	I1126 20:23:47.384819  301922 client.go:173] LocalClient.Create starting
	I1126 20:23:47.384897  301922 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 20:23:47.384930  301922 main.go:143] libmachine: Decoding PEM data...
	I1126 20:23:47.384948  301922 main.go:143] libmachine: Parsing certificate...
	I1126 20:23:47.385006  301922 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 20:23:47.385026  301922 main.go:143] libmachine: Decoding PEM data...
	I1126 20:23:47.385037  301922 main.go:143] libmachine: Parsing certificate...
	I1126 20:23:47.385331  301922 cli_runner.go:164] Run: docker network inspect calico-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:23:47.402007  301922 cli_runner.go:211] docker network inspect calico-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:23:47.402067  301922 network_create.go:284] running [docker network inspect calico-825702] to gather additional debugging logs...
	I1126 20:23:47.402089  301922 cli_runner.go:164] Run: docker network inspect calico-825702
	W1126 20:23:47.418384  301922 cli_runner.go:211] docker network inspect calico-825702 returned with exit code 1
	I1126 20:23:47.418403  301922 network_create.go:287] error running [docker network inspect calico-825702]: docker network inspect calico-825702: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-825702 not found
	I1126 20:23:47.418424  301922 network_create.go:289] output of [docker network inspect calico-825702]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-825702 not found
	
	** /stderr **
	I1126 20:23:47.418562  301922 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:47.437835  301922 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
	I1126 20:23:47.438546  301922 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bbc6aaddce08 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:93:ea:3f:0d:7e} reservation:<nil>}
	I1126 20:23:47.439254  301922 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ecd673f5b2f2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:d0:57:1a:06:91} reservation:<nil>}
	I1126 20:23:47.440093  301922 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bbb3cbf3682c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:9a:00:30:1a:2a:ff} reservation:<nil>}
	I1126 20:23:47.440717  301922 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ec68256d4118 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:72:5d:f9:71:de:9b} reservation:<nil>}
	I1126 20:23:47.441770  301922 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f882f0}
	I1126 20:23:47.441805  301922 network_create.go:124] attempt to create docker network calico-825702 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1126 20:23:47.441857  301922 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-825702 calico-825702
	I1126 20:23:47.490192  301922 network_create.go:108] docker network calico-825702 192.168.94.0/24 created
	I1126 20:23:47.490227  301922 kic.go:121] calculated static IP "192.168.94.2" for the "calico-825702" container
	I1126 20:23:47.490292  301922 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:23:47.510542  301922 cli_runner.go:164] Run: docker volume create calico-825702 --label name.minikube.sigs.k8s.io=calico-825702 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:23:47.529620  301922 oci.go:103] Successfully created a docker volume calico-825702
	I1126 20:23:47.529730  301922 cli_runner.go:164] Run: docker run --rm --name calico-825702-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-825702 --entrypoint /usr/bin/test -v calico-825702:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:23:47.919043  301922 oci.go:107] Successfully prepared a docker volume calico-825702
	I1126 20:23:47.919105  301922 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:47.919118  301922 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:23:47.919188  301922 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-825702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:23:51.718046  301922 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-825702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.798811549s)
	I1126 20:23:51.718073  301922 kic.go:203] duration metric: took 3.79895426s to extract preloaded images to volume ...
	W1126 20:23:51.718159  301922 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:23:51.718205  301922 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:23:51.718250  301922 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:23:51.803062  301922 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-825702 --name calico-825702 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-825702 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-825702 --network calico-825702 --ip 192.168.94.2 --volume calico-825702:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:23:51.928816  299373 cli_runner.go:164] Run: docker network inspect kindnet-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:51.953067  299373 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:51.959158  299373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:51.972160  299373 kubeadm.go:884] updating cluster {Name:kindnet-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:23:51.972304  299373 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:51.972357  299373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:52.026301  299373 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:52.026330  299373 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:52.026384  299373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:52.064356  299373 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:52.064433  299373 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:52.064478  299373 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1126 20:23:52.064645  299373 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-825702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1126 20:23:52.064865  299373 ssh_runner.go:195] Run: crio config
	I1126 20:23:52.143227  299373 cni.go:84] Creating CNI manager for "kindnet"
	I1126 20:23:52.143261  299373 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:52.143292  299373 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-825702 NodeName:kindnet-825702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:52.143453  299373 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-825702"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:23:52.143551  299373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:52.154670  299373 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:52.154742  299373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:52.164507  299373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1126 20:23:52.183112  299373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:52.204293  299373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1126 20:23:52.221216  299373 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:52.225616  299373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:52.237744  299373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:52.347578  299373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:52.372296  299373 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702 for IP: 192.168.76.2
	I1126 20:23:52.372317  299373 certs.go:195] generating shared ca certs ...
	I1126 20:23:52.372337  299373 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.372557  299373 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:52.372629  299373 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:52.372644  299373 certs.go:257] generating profile certs ...
	I1126 20:23:52.372723  299373 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.key
	I1126 20:23:52.372738  299373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.crt with IP's: []
	I1126 20:23:52.451226  299373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.crt ...
	I1126 20:23:52.451260  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.crt: {Name:mke953dd7968e23857340a97386719eb22be1c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.451443  299373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.key ...
	I1126 20:23:52.451473  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.key: {Name:mk2677457717c3733bee89a1d00ffb348a73cf4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.451607  299373 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key.f4b9e689
	I1126 20:23:52.451632  299373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt.f4b9e689 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1126 20:23:52.659651  299373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt.f4b9e689 ...
	I1126 20:23:52.659675  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt.f4b9e689: {Name:mk4ab2cf14f50dedda220d9db59a04a09297a2df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.659851  299373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key.f4b9e689 ...
	I1126 20:23:52.659869  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key.f4b9e689: {Name:mk0378cd86c079f093c2f36200397fef79275ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.659967  299373 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt.f4b9e689 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt
	I1126 20:23:52.660058  299373 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key.f4b9e689 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key
	I1126 20:23:52.660131  299373 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.key
	I1126 20:23:52.660153  299373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.crt with IP's: []
	I1126 20:23:52.703185  299373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.crt ...
	I1126 20:23:52.703213  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.crt: {Name:mkf29caa948722fe21b05adfd9a6900914e9f54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.703368  299373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.key ...
	I1126 20:23:52.703382  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.key: {Name:mkbbd595893f48f215568b4c50887a57446454b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.703651  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:52.703710  299373 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:52.703727  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:52.703768  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:52.703806  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:52.703843  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:52.703915  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:52.704614  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:52.726616  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:52.749402  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:52.773210  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:52.796723  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 20:23:52.817151  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:52.839422  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:52.862609  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:23:52.884097  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:52.909769  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:52.935154  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:52.956212  299373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:52.973806  299373 ssh_runner.go:195] Run: openssl version
	I1126 20:23:52.981751  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:52.992940  299373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:52.998818  299373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:52.998873  299373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:53.046862  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:23:53.056085  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:53.065426  299373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:53.069888  299373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:53.069942  299373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:53.113677  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:23:53.124310  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:53.134994  299373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:53.139085  299373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:53.139153  299373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:53.186819  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:23:53.197687  299373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:53.202789  299373 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:23:53.202844  299373 kubeadm.go:401] StartCluster: {Name:kindnet-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:53.202948  299373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:53.203003  299373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:53.237609  299373 cri.go:89] found id: ""
	I1126 20:23:53.237681  299373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:53.247899  299373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:23:53.258375  299373 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:23:53.258426  299373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:23:53.268115  299373 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:23:53.268135  299373 kubeadm.go:158] found existing configuration files:
	
	I1126 20:23:53.268185  299373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:23:53.277837  299373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:23:53.277889  299373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:23:53.287374  299373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:23:53.296497  299373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:23:53.296543  299373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:23:53.305110  299373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:23:53.314492  299373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:23:53.314541  299373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:23:53.323438  299373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:23:53.332867  299373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:23:53.332912  299373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:23:53.342352  299373 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:23:53.407775  299373 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 20:23:53.472376  299373 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:23:52.232689  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Running}}
	I1126 20:23:52.254397  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:23:52.279673  301922 cli_runner.go:164] Run: docker exec calico-825702 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:23:52.346298  301922 oci.go:144] the created container "calico-825702" has a running status.
	I1126 20:23:52.346330  301922 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa...
	I1126 20:23:52.845596  301922 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:23:52.879697  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:23:52.903068  301922 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:23:52.903102  301922 kic_runner.go:114] Args: [docker exec --privileged calico-825702 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:23:52.960815  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:23:52.983354  301922 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:52.983442  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.007190  301922 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:53.007523  301922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1126 20:23:53.007544  301922 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:53.152041  301922 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-825702
	
	I1126 20:23:53.152070  301922 ubuntu.go:182] provisioning hostname "calico-825702"
	I1126 20:23:53.152128  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.175500  301922 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:53.175790  301922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1126 20:23:53.175814  301922 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-825702 && echo "calico-825702" | sudo tee /etc/hostname
	I1126 20:23:53.344912  301922 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-825702
	
	I1126 20:23:53.344997  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.366624  301922 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:53.366927  301922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1126 20:23:53.366955  301922 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-825702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-825702/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-825702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:53.515999  301922 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:53.516036  301922 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:53.516083  301922 ubuntu.go:190] setting up certificates
	I1126 20:23:53.516098  301922 provision.go:84] configureAuth start
	I1126 20:23:53.516160  301922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-825702
	I1126 20:23:53.534836  301922 provision.go:143] copyHostCerts
	I1126 20:23:53.534900  301922 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:53.534908  301922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:53.534988  301922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:53.535091  301922 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:53.535102  301922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:53.535138  301922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:53.535209  301922 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:53.535218  301922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:53.535248  301922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:53.535315  301922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.calico-825702 san=[127.0.0.1 192.168.94.2 calico-825702 localhost minikube]
	I1126 20:23:53.591294  301922 provision.go:177] copyRemoteCerts
	I1126 20:23:53.591360  301922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:53.591434  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.608811  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:53.707057  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:23:53.725312  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:53.741919  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 20:23:53.758223  301922 provision.go:87] duration metric: took 242.112645ms to configureAuth
	I1126 20:23:53.758245  301922 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:53.758385  301922 config.go:182] Loaded profile config "calico-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:53.758550  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.776347  301922 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:53.776605  301922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1126 20:23:53.776629  301922 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:54.050892  301922 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:54.050915  301922 machine.go:97] duration metric: took 1.067537807s to provisionDockerMachine
	I1126 20:23:54.050927  301922 client.go:176] duration metric: took 6.666100233s to LocalClient.Create
	I1126 20:23:54.050948  301922 start.go:167] duration metric: took 6.66618385s to libmachine.API.Create "calico-825702"
	I1126 20:23:54.050959  301922 start.go:293] postStartSetup for "calico-825702" (driver="docker")
	I1126 20:23:54.050974  301922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:54.051055  301922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:54.051102  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:54.069703  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:54.170490  301922 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:54.173899  301922 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:54.173923  301922 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:54.173934  301922 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:54.173990  301922 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:54.174079  301922 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:54.174193  301922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:54.181770  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:54.203567  301922 start.go:296] duration metric: took 152.593375ms for postStartSetup
	I1126 20:23:54.203966  301922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-825702
	I1126 20:23:54.227877  301922 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/config.json ...
	I1126 20:23:54.228113  301922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:54.228153  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:54.246033  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:54.342660  301922 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:54.347188  301922 start.go:128] duration metric: took 6.965258038s to createHost
	I1126 20:23:54.347209  301922 start.go:83] releasing machines lock for "calico-825702", held for 6.965359491s
	I1126 20:23:54.347278  301922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-825702
	I1126 20:23:54.365495  301922 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:54.365535  301922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:54.365552  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:54.365614  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:54.384799  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:54.385666  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:54.550418  301922 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:54.557448  301922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:54.590674  301922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:54.594999  301922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:54.595057  301922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:54.619364  301922 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:23:54.619381  301922 start.go:496] detecting cgroup driver to use...
	I1126 20:23:54.619405  301922 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:54.619441  301922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:54.634668  301922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:54.646269  301922 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:54.646312  301922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:54.661594  301922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:54.682857  301922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:54.773728  301922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:54.870129  301922 docker.go:234] disabling docker service ...
	I1126 20:23:54.870190  301922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:54.887501  301922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:54.899403  301922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:54.985503  301922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:55.066410  301922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:55.078398  301922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:55.092208  301922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:55.092272  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.104946  301922 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:55.105002  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.113316  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.121861  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.130432  301922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:55.137951  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.145917  301922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.159553  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.167554  301922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:55.174353  301922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:55.181028  301922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:55.257553  301922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:23:55.564077  301922 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:55.564154  301922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:55.568154  301922 start.go:564] Will wait 60s for crictl version
	I1126 20:23:55.568207  301922 ssh_runner.go:195] Run: which crictl
	I1126 20:23:55.571719  301922 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:55.595758  301922 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:55.595834  301922 ssh_runner.go:195] Run: crio --version
	I1126 20:23:55.622499  301922 ssh_runner.go:195] Run: crio --version
	I1126 20:23:55.651199  301922 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
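The CRI-O configuration steps logged above all follow one pattern: whole-line `sed` replacements against the `/etc/crio/crio.conf.d/02-crio.conf` drop-in (replace `pause_image`, replace `cgroup_manager`, delete then re-append `conmon_cgroup`), followed by `daemon-reload` and a restart. A minimal standalone sketch of the same edit pattern, using a temporary file in place of the real drop-in:

```shell
#!/bin/bash
# Sketch of minikube's CRI-O drop-in edit pattern; a temp file stands in
# for /etc/crio/crio.conf.d/02-crio.conf (starting values are made up).
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
EOF

# Replace the whole line regardless of its previous value (as in the log).
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
# Drop any existing conmon_cgroup, then append a fresh one after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

result=$(cat "$conf")
rm -f "$conf"
echo "$result"
```

Deleting and re-adding `conmon_cgroup` (rather than editing it in place) keeps the edit idempotent even when the key is absent from the file.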
	W1126 20:23:52.201745  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:54.206831  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:55.652327  301922 cli_runner.go:164] Run: docker network inspect calico-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:55.671350  301922 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:55.675285  301922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:55.685531  301922 kubeadm.go:884] updating cluster {Name:calico-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:23:55.685648  301922 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:55.685703  301922 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:55.718537  301922 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:55.718557  301922 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:55.718601  301922 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:55.743609  301922 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:55.743627  301922 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:55.743636  301922 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1126 20:23:55.743732  301922 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-825702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1126 20:23:55.743808  301922 ssh_runner.go:195] Run: crio config
	I1126 20:23:55.792565  301922 cni.go:84] Creating CNI manager for "calico"
	I1126 20:23:55.792593  301922 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:55.792612  301922 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-825702 NodeName:calico-825702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:55.792751  301922 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-825702"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:23:55.792832  301922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:55.800817  301922 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:55.800879  301922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:55.808376  301922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 20:23:55.820303  301922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:55.834946  301922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1126 20:23:55.846922  301922 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:55.850109  301922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
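Both `/etc/hosts` updates in the log use a remove-then-append pattern: `grep -v` strips any existing line for the hostname, the new mapping is echoed on the end, and the result is copied back. That makes repeated runs idempotent. A self-contained sketch against a scratch file (the `update_hosts_entry` helper name is made up for illustration):

```shell
#!/bin/bash
# Remove-then-append /etc/hosts update, mirroring the logged command; a
# temp file stands in for /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.94.1\thost.minikube.internal\n' > "$hosts"

update_hosts_entry() {
  local ip=$1 name=$2 file=$3
  # Drop any line ending in "<TAB><name>", then append the current mapping.
  { grep -v $'\t'"$name"'$' "$file"; printf '%s\t%s\n' "$ip" "$name"; } > "$file.new"
  mv "$file.new" "$file"
}

update_hosts_entry 192.168.94.2 control-plane.minikube.internal "$hosts"
update_hosts_entry 192.168.94.2 control-plane.minikube.internal "$hosts"  # re-run: no duplicate

result=$(cat "$hosts")
rm -f "$hosts"
echo "$result"
```

After two runs the file still holds exactly one `control-plane.minikube.internal` entry.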
	I1126 20:23:55.859786  301922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:55.940056  301922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:55.974712  301922 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702 for IP: 192.168.94.2
	I1126 20:23:55.974732  301922 certs.go:195] generating shared ca certs ...
	I1126 20:23:55.974765  301922 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:55.974929  301922 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:55.974979  301922 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:55.974992  301922 certs.go:257] generating profile certs ...
	I1126 20:23:55.975043  301922 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.key
	I1126 20:23:55.975056  301922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.crt with IP's: []
	I1126 20:23:56.061237  301922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.crt ...
	I1126 20:23:56.061263  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.crt: {Name:mkff436b36917f3276c4d326ee3c93b943c50217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.061470  301922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.key ...
	I1126 20:23:56.061499  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.key: {Name:mke9be3470e889891f1d211221ee9570ae55f9b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.061632  301922 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key.331a6116
	I1126 20:23:56.061651  301922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt.331a6116 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1126 20:23:56.182855  301922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt.331a6116 ...
	I1126 20:23:56.182879  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt.331a6116: {Name:mk77e9a94b309aa554928316f36f0d1850b38498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.183045  301922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key.331a6116 ...
	I1126 20:23:56.183062  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key.331a6116: {Name:mkcafc5699394c4503433eea215ea165fccba0b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.183169  301922 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt.331a6116 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt
	I1126 20:23:56.183243  301922 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key.331a6116 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key
	I1126 20:23:56.183295  301922 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.key
	I1126 20:23:56.183311  301922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.crt with IP's: []
	I1126 20:23:56.244960  301922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.crt ...
	I1126 20:23:56.244983  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.crt: {Name:mkc8d52691e96094e07a154dd1027b02c31d9b71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.245123  301922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.key ...
	I1126 20:23:56.245133  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.key: {Name:mk62cfb98ae6397917e957e1780d72e3effaf42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.245294  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:56.245329  301922 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:56.245341  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:56.245364  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:56.245389  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:56.245411  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:56.245450  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:56.245993  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:56.263488  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:56.280301  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:56.296600  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:56.312923  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 20:23:56.328762  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:56.344977  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:56.360896  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:23:56.376706  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:56.394705  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:56.410740  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:56.427214  301922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:56.438949  301922 ssh_runner.go:195] Run: openssl version
	I1126 20:23:56.444826  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:56.452556  301922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:56.456126  301922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:56.456163  301922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:56.489913  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:23:56.498033  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:56.505965  301922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:56.509363  301922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:56.509404  301922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:56.544986  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:23:56.552961  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:56.561199  301922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:56.564984  301922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:56.565038  301922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:56.598902  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
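The `openssl x509 -hash` plus `ln -fs` steps above implement OpenSSL's CA lookup convention: a trusted cert is found in `/etc/ssl/certs` via a symlink named after its subject hash (e.g. `b5213941.0` for minikubeCA). A sketch that generates a throwaway self-signed CA and builds the hashed link in a temp directory (all names hypothetical):

```shell
#!/bin/bash
# Build an OpenSSL subject-hash symlink ("<hash>.0"), as the logged
# ln -fs commands do for /etc/ssl/certs.
dir=$(mktemp -d)
# Throwaway self-signed CA for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")   # 8 hex chars
ln -fs "$dir/ca.pem" "$dir/$hash.0"                   # hashed symlink
link_target=$(readlink "$dir/$hash.0")
rm -rf "$dir"
echo "$hash -> $link_target"
```

The `.0` suffix disambiguates distinct certificates whose subjects happen to hash to the same value (`.1`, `.2`, … would follow).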
	I1126 20:23:56.606916  301922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:56.610139  301922 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:23:56.610192  301922 kubeadm.go:401] StartCluster: {Name:calico-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:56.610273  301922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:56.610312  301922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:56.637376  301922 cri.go:89] found id: ""
	I1126 20:23:56.637441  301922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:56.646195  301922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:23:56.653587  301922 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:23:56.653634  301922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:23:56.660892  301922 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:23:56.660909  301922 kubeadm.go:158] found existing configuration files:
	
	I1126 20:23:56.660949  301922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:23:56.668113  301922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:23:56.668156  301922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:23:56.675959  301922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:23:56.683363  301922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:23:56.683409  301922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:23:56.691037  301922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:23:56.699566  301922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:23:56.699618  301922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:23:56.707819  301922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:23:56.716721  301922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:23:56.716766  301922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:23:56.723756  301922 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:23:56.779698  301922 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 20:23:56.838000  301922 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1126 20:23:56.701302  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:59.200785  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:24:01.200081  292013 pod_ready.go:94] pod "coredns-66bc5c9577-tpmmm" is "Ready"
	I1126 20:24:01.200103  292013 pod_ready.go:86] duration metric: took 39.505126826s for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.202519  292013 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.206001  292013 pod_ready.go:94] pod "etcd-default-k8s-diff-port-178152" is "Ready"
	I1126 20:24:01.206021  292013 pod_ready.go:86] duration metric: took 3.482053ms for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.208057  292013 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.211334  292013 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-178152" is "Ready"
	I1126 20:24:01.211354  292013 pod_ready.go:86] duration metric: took 3.278351ms for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.212873  292013 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.399534  292013 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-178152" is "Ready"
	I1126 20:24:01.399562  292013 pod_ready.go:86] duration metric: took 186.671342ms for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.600128  292013 pod_ready.go:83] waiting for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.999704  292013 pod_ready.go:94] pod "kube-proxy-vd7fp" is "Ready"
	I1126 20:24:01.999736  292013 pod_ready.go:86] duration metric: took 399.560549ms for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:02.199058  292013 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:02.598494  292013 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-178152" is "Ready"
	I1126 20:24:02.598524  292013 pod_ready.go:86] duration metric: took 399.437453ms for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:02.598549  292013 pod_ready.go:40] duration metric: took 40.90777129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:24:02.652711  292013 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:24:02.654325  292013 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-178152" cluster and "default" namespace by default
	I1126 20:24:05.560744  299373 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:24:05.560802  299373 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:24:05.560909  299373 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:24:05.560966  299373 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:24:05.560996  299373 kubeadm.go:319] OS: Linux
	I1126 20:24:05.561053  299373 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:24:05.561150  299373 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:24:05.561228  299373 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:24:05.561290  299373 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:24:05.561333  299373 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:24:05.561384  299373 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:24:05.561449  299373 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:24:05.561540  299373 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:24:05.561653  299373 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:24:05.561783  299373 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:24:05.561926  299373 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:24:05.562018  299373 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:24:05.563984  299373 out.go:252]   - Generating certificates and keys ...
	I1126 20:24:05.564074  299373 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:24:05.564177  299373 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:24:05.564248  299373 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:24:05.564294  299373 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:24:05.564347  299373 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:24:05.564389  299373 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:24:05.564432  299373 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:24:05.564541  299373 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-825702 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:24:05.564585  299373 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:24:05.564702  299373 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-825702 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:24:05.564805  299373 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:24:05.564921  299373 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:24:05.564988  299373 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:24:05.565063  299373 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:24:05.565145  299373 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:24:05.565216  299373 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:24:05.565289  299373 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:24:05.565397  299373 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:24:05.565443  299373 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:24:05.565585  299373 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:24:05.565682  299373 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 20:24:05.566779  299373 out.go:252]   - Booting up control plane ...
	I1126 20:24:05.566904  299373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:24:05.566981  299373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:24:05.567055  299373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:24:05.567174  299373 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:24:05.567298  299373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:24:05.567476  299373 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:24:05.567593  299373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:24:05.567641  299373 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:24:05.567821  299373 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:24:05.567992  299373 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 20:24:05.568079  299373 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00138774s
	I1126 20:24:05.568195  299373 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:24:05.568309  299373 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1126 20:24:05.568438  299373 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:24:05.568584  299373 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 20:24:05.568656  299373 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.468919721s
	I1126 20:24:05.568714  299373 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.276974193s
	I1126 20:24:05.568780  299373 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001757807s
	I1126 20:24:05.568875  299373 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:24:05.568992  299373 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:24:05.569052  299373 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:24:05.569292  299373 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-825702 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:24:05.569372  299373 kubeadm.go:319] [bootstrap-token] Using token: 8fbp9l.w4uvcyj7kukg5ymm
	I1126 20:24:05.570505  299373 out.go:252]   - Configuring RBAC rules ...
	I1126 20:24:05.570598  299373 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:24:05.570691  299373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:24:05.570870  299373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:24:05.571032  299373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:24:05.571199  299373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:24:05.571285  299373 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:24:05.571387  299373 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:24:05.571432  299373 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:24:05.571485  299373 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:24:05.571491  299373 kubeadm.go:319] 
	I1126 20:24:05.571537  299373 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:24:05.571543  299373 kubeadm.go:319] 
	I1126 20:24:05.571607  299373 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:24:05.571618  299373 kubeadm.go:319] 
	I1126 20:24:05.571642  299373 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:24:05.571704  299373 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:24:05.571751  299373 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:24:05.571757  299373 kubeadm.go:319] 
	I1126 20:24:05.571803  299373 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:24:05.571808  299373 kubeadm.go:319] 
	I1126 20:24:05.571844  299373 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:24:05.571850  299373 kubeadm.go:319] 
	I1126 20:24:05.571907  299373 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:24:05.571977  299373 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:24:05.572070  299373 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:24:05.572088  299373 kubeadm.go:319] 
	I1126 20:24:05.572191  299373 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:24:05.572299  299373 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:24:05.572311  299373 kubeadm.go:319] 
	I1126 20:24:05.572411  299373 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8fbp9l.w4uvcyj7kukg5ymm \
	I1126 20:24:05.572537  299373 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 20:24:05.572565  299373 kubeadm.go:319] 	--control-plane 
	I1126 20:24:05.572572  299373 kubeadm.go:319] 
	I1126 20:24:05.572655  299373 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:24:05.572661  299373 kubeadm.go:319] 
	I1126 20:24:05.572724  299373 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8fbp9l.w4uvcyj7kukg5ymm \
	I1126 20:24:05.572825  299373 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
	I1126 20:24:05.572839  299373 cni.go:84] Creating CNI manager for "kindnet"
	I1126 20:24:05.574695  299373 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 20:24:06.105903  301922 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:24:06.105969  301922 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:24:06.106080  301922 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:24:06.106153  301922 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:24:06.106199  301922 kubeadm.go:319] OS: Linux
	I1126 20:24:06.106256  301922 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:24:06.106316  301922 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:24:06.106379  301922 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:24:06.106438  301922 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:24:06.106600  301922 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:24:06.106682  301922 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:24:06.106752  301922 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:24:06.106804  301922 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:24:06.106933  301922 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:24:06.107078  301922 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:24:06.107223  301922 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:24:06.107339  301922 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:24:06.109423  301922 out.go:252]   - Generating certificates and keys ...
	I1126 20:24:06.109642  301922 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:24:06.109957  301922 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:24:06.110091  301922 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:24:06.110216  301922 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:24:06.110304  301922 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:24:06.110373  301922 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:24:06.110447  301922 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:24:06.110634  301922 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-825702 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1126 20:24:06.110711  301922 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:24:06.110897  301922 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-825702 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1126 20:24:06.111002  301922 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:24:06.111093  301922 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:24:06.111164  301922 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:24:06.111284  301922 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:24:06.111386  301922 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:24:06.111488  301922 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:24:06.111569  301922 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:24:06.111670  301922 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:24:06.111759  301922 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:24:06.111888  301922 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:24:06.111989  301922 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 20:24:06.114082  301922 out.go:252]   - Booting up control plane ...
	I1126 20:24:06.114212  301922 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:24:06.114345  301922 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:24:06.114434  301922 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:24:06.114935  301922 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:24:06.115057  301922 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:24:06.115165  301922 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:24:06.115272  301922 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:24:06.115327  301922 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:24:06.115535  301922 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:24:06.115695  301922 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 20:24:06.115783  301922 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001310189s
	I1126 20:24:06.115915  301922 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:24:06.116028  301922 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1126 20:24:06.116161  301922 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:24:06.116282  301922 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 20:24:06.116376  301922 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.44254188s
	I1126 20:24:06.116565  301922 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.017919921s
	I1126 20:24:06.116655  301922 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501308517s
	I1126 20:24:06.116792  301922 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:24:06.116967  301922 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:24:06.117053  301922 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:24:06.117301  301922 kubeadm.go:319] [mark-control-plane] Marking the node calico-825702 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:24:06.117379  301922 kubeadm.go:319] [bootstrap-token] Using token: nxelx6.dmd09ypn6rh0xqme
	I1126 20:24:06.118781  301922 out.go:252]   - Configuring RBAC rules ...
	I1126 20:24:06.118904  301922 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:24:06.119004  301922 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:24:06.119188  301922 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:24:06.119345  301922 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:24:06.119528  301922 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:24:06.119668  301922 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:24:06.119883  301922 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:24:06.119940  301922 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:24:06.119992  301922 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:24:06.119998  301922 kubeadm.go:319] 
	I1126 20:24:06.120066  301922 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:24:06.120072  301922 kubeadm.go:319] 
	I1126 20:24:06.120221  301922 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:24:06.120236  301922 kubeadm.go:319] 
	I1126 20:24:06.120265  301922 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:24:06.120358  301922 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:24:06.120496  301922 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:24:06.120511  301922 kubeadm.go:319] 
	I1126 20:24:06.120574  301922 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:24:06.120579  301922 kubeadm.go:319] 
	I1126 20:24:06.120632  301922 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:24:06.120636  301922 kubeadm.go:319] 
	I1126 20:24:06.120699  301922 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:24:06.120785  301922 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:24:06.120872  301922 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:24:06.120878  301922 kubeadm.go:319] 
	I1126 20:24:06.120983  301922 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:24:06.121079  301922 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:24:06.121084  301922 kubeadm.go:319] 
	I1126 20:24:06.121190  301922 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nxelx6.dmd09ypn6rh0xqme \
	I1126 20:24:06.121315  301922 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 20:24:06.121341  301922 kubeadm.go:319] 	--control-plane 
	I1126 20:24:06.121344  301922 kubeadm.go:319] 
	I1126 20:24:06.121431  301922 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:24:06.121441  301922 kubeadm.go:319] 
	I1126 20:24:06.121547  301922 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nxelx6.dmd09ypn6rh0xqme \
	I1126 20:24:06.121695  301922 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
	I1126 20:24:06.121714  301922 cni.go:84] Creating CNI manager for "calico"
	I1126 20:24:06.122970  301922 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1126 20:24:06.124687  301922 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:24:06.124705  301922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1126 20:24:06.139843  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:24:06.955846  301922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:24:06.955918  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:06.955948  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-825702 minikube.k8s.io/updated_at=2025_11_26T20_24_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=calico-825702 minikube.k8s.io/primary=true
	I1126 20:24:07.042106  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:07.042215  301922 ops.go:34] apiserver oom_adj: -16
	I1126 20:24:05.575660  299373 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:24:05.580097  299373 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:24:05.580115  299373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:24:05.593510  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:24:05.826807  299373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:24:05.826855  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:05.826910  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-825702 minikube.k8s.io/updated_at=2025_11_26T20_24_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=kindnet-825702 minikube.k8s.io/primary=true
	I1126 20:24:05.837100  299373 ops.go:34] apiserver oom_adj: -16
	I1126 20:24:05.910995  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:06.411222  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:06.911861  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:07.411795  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:07.911186  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:08.411346  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:08.911270  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:09.411603  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:09.911254  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:10.411633  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:10.493576  299373 kubeadm.go:1114] duration metric: took 4.666766806s to wait for elevateKubeSystemPrivileges
	I1126 20:24:10.493616  299373 kubeadm.go:403] duration metric: took 17.290775151s to StartCluster
	I1126 20:24:10.493636  299373 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:24:10.493702  299373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:24:10.495453  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:24:10.495742  299373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:24:10.495744  299373 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:24:10.495824  299373 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:24:10.495932  299373 addons.go:70] Setting storage-provisioner=true in profile "kindnet-825702"
	I1126 20:24:10.495950  299373 config.go:182] Loaded profile config "kindnet-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:24:10.495949  299373 addons.go:70] Setting default-storageclass=true in profile "kindnet-825702"
	I1126 20:24:10.495979  299373 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-825702"
	I1126 20:24:10.495954  299373 addons.go:239] Setting addon storage-provisioner=true in "kindnet-825702"
	I1126 20:24:10.496123  299373 host.go:66] Checking if "kindnet-825702" exists ...
	I1126 20:24:10.496375  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:24:10.496707  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:24:10.498481  299373 out.go:179] * Verifying Kubernetes components...
	I1126 20:24:10.499688  299373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:24:10.529336  299373 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:24:10.530732  299373 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:24:10.530754  299373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:24:10.530810  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:24:10.535265  299373 addons.go:239] Setting addon default-storageclass=true in "kindnet-825702"
	I1126 20:24:10.535312  299373 host.go:66] Checking if "kindnet-825702" exists ...
	I1126 20:24:10.535786  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:24:10.574363  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:24:10.574983  299373 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:24:10.574998  299373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:24:10.575059  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:24:10.599532  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:24:10.639906  299373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:24:10.669584  299373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:24:10.699641  299373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:24:10.725550  299373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:24:10.835683  299373 node_ready.go:35] waiting up to 15m0s for node "kindnet-825702" to be "Ready" ...
	I1126 20:24:10.836182  299373 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1126 20:24:11.050598  299373 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 20:24:07.543158  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:08.043066  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:08.542666  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:09.042574  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:09.542680  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:10.042319  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:10.543216  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:11.042681  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:11.543188  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:11.624033  301922 kubeadm.go:1114] duration metric: took 4.668175134s to wait for elevateKubeSystemPrivileges
	I1126 20:24:11.624071  301922 kubeadm.go:403] duration metric: took 15.01388346s to StartCluster
	I1126 20:24:11.624096  301922 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:24:11.624160  301922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:24:11.626192  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:24:11.626442  301922 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:24:11.626651  301922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:24:11.626903  301922 config.go:182] Loaded profile config "calico-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:24:11.626880  301922 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:24:11.626987  301922 addons.go:70] Setting storage-provisioner=true in profile "calico-825702"
	I1126 20:24:11.627027  301922 addons.go:239] Setting addon storage-provisioner=true in "calico-825702"
	I1126 20:24:11.627039  301922 addons.go:70] Setting default-storageclass=true in profile "calico-825702"
	I1126 20:24:11.627087  301922 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-825702"
	I1126 20:24:11.627058  301922 host.go:66] Checking if "calico-825702" exists ...
	I1126 20:24:11.627451  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:24:11.627638  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:24:11.629622  301922 out.go:179] * Verifying Kubernetes components...
	I1126 20:24:11.631032  301922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:24:11.661644  301922 addons.go:239] Setting addon default-storageclass=true in "calico-825702"
	I1126 20:24:11.661694  301922 host.go:66] Checking if "calico-825702" exists ...
	I1126 20:24:11.662164  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:24:11.666864  301922 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:24:11.669519  301922 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:24:11.670544  301922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:24:11.670685  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:24:11.702767  301922 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:24:11.702787  301922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:24:11.702859  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:24:11.706879  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:24:11.723875  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:24:11.752254  301922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:24:11.794332  301922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:24:11.824148  301922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:24:11.843061  301922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:24:11.946968  301922 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1126 20:24:11.948471  301922 node_ready.go:35] waiting up to 15m0s for node "calico-825702" to be "Ready" ...
	I1126 20:24:12.168202  301922 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 20:24:12.169286  301922 addons.go:530] duration metric: took 542.404084ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:24:11.051711  299373 addons.go:530] duration metric: took 555.889438ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:24:11.342411  299373 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-825702" context rescaled to 1 replicas
	W1126 20:24:12.839283  299373 node_ready.go:57] node "kindnet-825702" has "Ready":"False" status (will retry)
	W1126 20:24:14.845082  299373 node_ready.go:57] node "kindnet-825702" has "Ready":"False" status (will retry)
	I1126 20:24:12.451657  301922 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-825702" context rescaled to 1 replicas
	W1126 20:24:13.951968  301922 node_ready.go:57] node "calico-825702" has "Ready":"False" status (will retry)
	W1126 20:24:15.952159  301922 node_ready.go:57] node "calico-825702" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.734002126Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.734027118Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.734049306Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.738343928Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.738365507Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.738384673Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.742668052Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.742693087Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.742713899Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.747656039Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.747678488Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.747695681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.752275393Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.752296685Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:07 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:07.994263516Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0cbbb5e5-30ee-4235-8723-1853af081aa0 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:24:07 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:07.995114905Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b20c75e5-bb7d-4397-ae70-47e4455a76f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:24:07 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:07.996192209Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j/dashboard-metrics-scraper" id=0d621099-8e78-4968-a2d0-5e7e22110b5f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:24:07 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:07.996320674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.002876564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.003553783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.034431219Z" level=info msg="Created container 32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j/dashboard-metrics-scraper" id=0d621099-8e78-4968-a2d0-5e7e22110b5f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.035216498Z" level=info msg="Starting container: 32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21" id=3477fe42-9872-4eef-a710-e0460b695ff0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.037221429Z" level=info msg="Started container" PID=1825 containerID=32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j/dashboard-metrics-scraper id=3477fe42-9872-4eef-a710-e0460b695ff0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=017ef0ec5922b7ad11bf31114bb4c491873f46267549097b30bafd19b0cd4886
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.161315256Z" level=info msg="Removing container: 73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818" id=c03688e1-243f-42b7-bea4-3faf224c389a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.170831185Z" level=info msg="Removed container 73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j/dashboard-metrics-scraper" id=c03688e1-243f-42b7-bea4-3faf224c389a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	32fb436af28ae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   017ef0ec5922b       dashboard-metrics-scraper-6ffb444bf9-lsm8j             kubernetes-dashboard
	4a7a55586fdac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   1c32bc080f106       storage-provisioner                                    kube-system
	a2ca658bde7be       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   888289f9667b9       kubernetes-dashboard-855c9754f9-m2nr4                  kubernetes-dashboard
	b5b54b11bd45b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   9504e470cabe5       coredns-66bc5c9577-tpmmm                               kube-system
	5f9228ddca102       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   2f7e294ddcea4       kindnet-bmzz2                                          kube-system
	ca65c52a7e15d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   9c19c36e89c69       kube-proxy-vd7fp                                       kube-system
	3656d4f58204b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   11b80ea4b81b1       busybox                                                default
	e8ac49bc740f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   1c32bc080f106       storage-provisioner                                    kube-system
	851ab28993a8b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   930252c9e9174       kube-controller-manager-default-k8s-diff-port-178152   kube-system
	cd9d1e4467356       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   bfc3619e67336       kube-apiserver-default-k8s-diff-port-178152            kube-system
	45aa87e14b73b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   25f8fb31ba381       etcd-default-k8s-diff-port-178152                      kube-system
	53d64e031f1a8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   a2c21fb760357       kube-scheduler-default-k8s-diff-port-178152            kube-system
	
	
	==> coredns [b5b54b11bd45b447bfaecefe94487f516b269bf58598eb7dcfa18af5edc1612e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60584 - 19012 "HINFO IN 8096409391269669939.2220905388687540780. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.460755858s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-178152
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-178152
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=default-k8s-diff-port-178152
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_22_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:22:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-178152
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:24:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:24:11 +0000   Wed, 26 Nov 2025 20:22:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:24:11 +0000   Wed, 26 Nov 2025 20:22:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:24:11 +0000   Wed, 26 Nov 2025 20:22:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:24:11 +0000   Wed, 26 Nov 2025 20:22:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-178152
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                d91795ef-51fb-4835-abf4-4b138b22a490
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-tpmmm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-178152                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-bmzz2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-178152             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-178152    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-vd7fp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-178152             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lsm8j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m2nr4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node default-k8s-diff-port-178152 event: Registered Node default-k8s-diff-port-178152 in Controller
	  Normal  NodeReady                96s                kubelet          Node default-k8s-diff-port-178152 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 61s)  kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 61s)  kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 61s)  kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node default-k8s-diff-port-178152 event: Registered Node default-k8s-diff-port-178152 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [45aa87e14b73bdfe289340af54adbd31ea4129fb3bbd2a635ed0f95587efea73] <==
	{"level":"warn","ts":"2025-11-26T20:23:19.363522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.372808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.380682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.386744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.415575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.423419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.434006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.444992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.450164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.458899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.468540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.478966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.492370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.508690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.517180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.528919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.535979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.542434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.604121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45164","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T20:23:51.079035Z","caller":"traceutil/trace.go:172","msg":"trace[2022983347] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"106.373018ms","start":"2025-11-26T20:23:50.972643Z","end":"2025-11-26T20:23:51.079016Z","steps":["trace[2022983347] 'process raft request'  (duration: 106.251042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T20:23:51.274721Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.870037ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:23:51.274804Z","caller":"traceutil/trace.go:172","msg":"trace[598037595] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:611; }","duration":"122.962634ms","start":"2025-11-26T20:23:51.151824Z","end":"2025-11-26T20:23:51.274787Z","steps":["trace[598037595] 'agreement among raft nodes before linearized reading'  (duration: 91.104634ms)","trace[598037595] 'range keys from in-memory index tree'  (duration: 31.734178ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-26T20:23:51.274833Z","caller":"traceutil/trace.go:172","msg":"trace[1194689814] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"166.970505ms","start":"2025-11-26T20:23:51.107844Z","end":"2025-11-26T20:23:51.274814Z","steps":["trace[1194689814] 'process raft request'  (duration: 135.124521ms)","trace[1194689814] 'compare'  (duration: 31.675879ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-26T20:23:51.645499Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"221.57607ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:23:51.645571Z","caller":"traceutil/trace.go:172","msg":"trace[1417153719] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:613; }","duration":"221.66038ms","start":"2025-11-26T20:23:51.423897Z","end":"2025-11-26T20:23:51.645557Z","steps":["trace[1417153719] 'range keys from in-memory index tree'  (duration: 221.45153ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:24:18 up  1:06,  0 user,  load average: 10.08, 4.97, 2.79
	Linux default-k8s-diff-port-178152 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5f9228ddca10228c8bcb4f8b1d2167718f80e4d928f6cbae8a8ee24504ece49e] <==
	I1126 20:23:21.518894       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:23:21.519113       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:23:21.519268       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:23:21.519286       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:23:21.519298       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:23:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:23:21.721969       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:23:21.721995       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:23:21.722006       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:23:21.722117       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:23:51.723079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:23:51.723094       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:23:51.723079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:23:51.723085       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1126 20:23:53.122663       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:23:53.122693       1 metrics.go:72] Registering metrics
	I1126 20:23:53.122754       1 controller.go:711] "Syncing nftables rules"
	I1126 20:24:01.722413       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:24:01.722452       1 main.go:301] handling current node
	I1126 20:24:11.722388       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:24:11.722443       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cd9d1e44673562de24369667807437c6a3c635356a7d55a049742bc674b8d8bb] <==
	I1126 20:23:20.130717       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:23:20.130724       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:23:20.130862       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:23:20.130898       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1126 20:23:20.130910       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1126 20:23:20.131679       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:23:20.131713       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:23:20.131853       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:23:20.131866       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:23:20.132275       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:23:20.133300       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:23:20.153018       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1126 20:23:20.153042       1 policy_source.go:240] refreshing policies
	I1126 20:23:20.189170       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:23:20.446074       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:23:20.475157       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:23:20.505488       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:23:20.511744       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:23:20.518588       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:23:20.545797       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.74.101"}
	I1126 20:23:20.554552       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.179.105"}
	I1126 20:23:21.027682       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:23:24.058889       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:23:24.109362       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:23:24.161261       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [851ab28993a8b771e5c0a2f66a99e6f22fbce78ae80259d843eb935540e459cb] <==
	I1126 20:23:23.612272       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:23:23.612309       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:23:23.612320       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:23:23.612327       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:23:23.618603       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:23:23.621726       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 20:23:23.626995       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:23:23.627012       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:23:23.627022       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:23:23.629237       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:23:23.631524       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:23:23.632706       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:23:23.648919       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:23:23.651068       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:23:23.652429       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:23:23.654688       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:23:23.655887       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:23:23.655925       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:23:23.656092       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:23:23.657582       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:23:23.662370       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:23:23.662507       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:23:23.662616       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-178152"
	I1126 20:23:23.662683       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 20:23:23.671687       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ca65c52a7e15d14f87ed86b2027a3434dde86bb3c0239a70ce563e361f2bf410] <==
	I1126 20:23:21.387573       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:23:21.450349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:23:21.550508       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:23:21.550540       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:23:21.550614       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:23:21.569528       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:23:21.569578       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:23:21.574472       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:23:21.574853       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:23:21.574877       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:23:21.577572       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:23:21.577613       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:23:21.577649       1 config.go:200] "Starting service config controller"
	I1126 20:23:21.577656       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:23:21.577581       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:23:21.577897       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:23:21.577962       1 config.go:309] "Starting node config controller"
	I1126 20:23:21.577969       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:23:21.678758       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:23:21.678888       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:23:21.678910       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:23:21.678919       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [53d64e031f1a8bdf5d5b01443fbdf38f958b0ba7ea8c6514663f714eb5cedebf] <==
	I1126 20:23:19.410368       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:23:20.352851       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:23:20.352878       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:23:20.357837       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:23:20.357930       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:23:20.357845       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:23:20.357976       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:23:20.357839       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1126 20:23:20.358013       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1126 20:23:20.358277       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:23:20.358310       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:23:20.458531       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1126 20:23:20.458575       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:23:20.458529       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:23:29 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:29.055580     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2nr4" podStartSLOduration=1.256444008 podStartE2EDuration="5.055557549s" podCreationTimestamp="2025-11-26 20:23:24 +0000 UTC" firstStartedPulling="2025-11-26 20:23:24.564365409 +0000 UTC m=+6.664085249" lastFinishedPulling="2025-11-26 20:23:28.363478926 +0000 UTC m=+10.463198790" observedRunningTime="2025-11-26 20:23:29.055138909 +0000 UTC m=+11.154858770" watchObservedRunningTime="2025-11-26 20:23:29.055557549 +0000 UTC m=+11.155277411"
	Nov 26 20:23:30 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:30.966918     731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:23:31 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:31.049468     731 scope.go:117] "RemoveContainer" containerID="f9993fb2e1fb3b48476ed7b65f18f08f394cc9edf8167b63f659583295ad63a9"
	Nov 26 20:23:32 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:32.053764     731 scope.go:117] "RemoveContainer" containerID="f9993fb2e1fb3b48476ed7b65f18f08f394cc9edf8167b63f659583295ad63a9"
	Nov 26 20:23:32 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:32.053900     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:32 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:32.054088     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:23:33 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:33.057274     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:33 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:33.057482     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:23:34 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:34.957029     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:34 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:34.957290     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:23:45 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:45.994130     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:46 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:46.091327     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:46 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:46.091565     731 scope.go:117] "RemoveContainer" containerID="73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818"
	Nov 26 20:23:46 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:46.091756     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:23:52 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:52.113765     731 scope.go:117] "RemoveContainer" containerID="e8ac49bc740f47f5f32f520593ad8e80e3fef725e3ab711117dd927d45c20eb3"
	Nov 26 20:23:54 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:54.956730     731 scope.go:117] "RemoveContainer" containerID="73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818"
	Nov 26 20:23:54 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:54.956903     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:24:07 default-k8s-diff-port-178152 kubelet[731]: I1126 20:24:07.993893     731 scope.go:117] "RemoveContainer" containerID="73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818"
	Nov 26 20:24:08 default-k8s-diff-port-178152 kubelet[731]: I1126 20:24:08.159372     731 scope.go:117] "RemoveContainer" containerID="73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818"
	Nov 26 20:24:08 default-k8s-diff-port-178152 kubelet[731]: I1126 20:24:08.159625     731 scope.go:117] "RemoveContainer" containerID="32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21"
	Nov 26 20:24:08 default-k8s-diff-port-178152 kubelet[731]: E1126 20:24:08.159835     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:24:14 default-k8s-diff-port-178152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:24:14 default-k8s-diff-port-178152 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:24:14 default-k8s-diff-port-178152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:24:14 default-k8s-diff-port-178152 systemd[1]: kubelet.service: Consumed 1.748s CPU time.
	
	
	==> kubernetes-dashboard [a2ca658bde7be15ad908300df3903014b37b72b558d8856ce799024e45c11ee0] <==
	2025/11/26 20:23:28 Using namespace: kubernetes-dashboard
	2025/11/26 20:23:28 Using in-cluster config to connect to apiserver
	2025/11/26 20:23:28 Using secret token for csrf signing
	2025/11/26 20:23:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:23:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:23:28 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:23:28 Generating JWE encryption key
	2025/11/26 20:23:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:23:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:23:28 Initializing JWE encryption key from synchronized object
	2025/11/26 20:23:28 Creating in-cluster Sidecar client
	2025/11/26 20:23:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:23:28 Serving insecurely on HTTP port: 9090
	2025/11/26 20:23:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:23:28 Starting overwatch
	
	
	==> storage-provisioner [4a7a55586fdacfafe83753690495f1e47b964f7c04ba8d593fac2ce9f15e9dd8] <==
	I1126 20:23:52.184317       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:23:52.194789       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:23:52.194901       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:23:52.197607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:55.653149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:59.914145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:03.513112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:06.567167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:09.590270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:09.600631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:24:09.601016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:24:09.601508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-178152_aa7fe461-e867-4347-a0a6-7207eabe3e57!
	I1126 20:24:09.601649       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bea3325-c523-4ea4-89b9-0b2d778812eb", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-178152_aa7fe461-e867-4347-a0a6-7207eabe3e57 became leader
	W1126 20:24:09.609171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:09.616259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:24:09.702284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-178152_aa7fe461-e867-4347-a0a6-7207eabe3e57!
	W1126 20:24:11.619273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:11.626066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:13.630010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:13.634009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:15.636906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:15.640546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:17.644076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:17.648825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e8ac49bc740f47f5f32f520593ad8e80e3fef725e3ab711117dd927d45c20eb3] <==
	I1126 20:23:21.352988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:23:51.356798       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152: exit status 2 (356.952677ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-178152 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-178152
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-178152:

-- stdout --
	[
	    {
	        "Id": "1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370",
	        "Created": "2025-11-26T20:22:08.62900996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292226,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:23:11.657604117Z",
	            "FinishedAt": "2025-11-26T20:23:10.784334711Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/hostname",
	        "HostsPath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/hosts",
	        "LogPath": "/var/lib/docker/containers/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370/1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370-json.log",
	        "Name": "/default-k8s-diff-port-178152",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-178152:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-178152",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1da700037b3cbba2b479732a8cf72dcd22801db8429cb9a5806c239b30001370",
	                "LowerDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988-init/diff:/var/lib/docker/overlay2/1661d775281cb7314a654558ee2c4e5880ffafb3a27a085032449cd60a68753a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1cbf40eef7d3ec80e807f2ef0c1010ea3cdc29bbb3549f4388d400ee4fc8f988/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-178152",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-178152/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-178152",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-178152",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-178152",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "51d6a2d6b828a4389d7418d31f8938c1b70c1c74e08990debe19a35152ddca9c",
	            "SandboxKey": "/var/run/docker/netns/51d6a2d6b828",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-178152": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec68256d41186ab4784970795756969f4ed3452c84879229e3a4f0a4adc0c9b1",
	                    "EndpointID": "aca567f2caea0d0b708ae3beb594c97b88d938c79b64bfd69c78d96828f66894",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "c2:b7:21:9c:18:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-178152",
	                        "1da700037b3c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152: exit status 2 (376.623384ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-178152 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-178152 logs -n 25: (1.579082928s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-825702 sudo systemctl cat kubelet --no-pager                         │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo journalctl -xeu kubelet --all --full --no-pager          │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /etc/kubernetes/kubelet.conf                         │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /var/lib/kubelet/config.yaml                         │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo systemctl status docker --all --full --no-pager          │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo systemctl cat docker --no-pager                          │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /etc/docker/daemon.json                              │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo docker system info                                       │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo systemctl status cri-docker --all --full --no-pager      │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo systemctl cat cri-docker --no-pager                      │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo cat /usr/lib/systemd/system/cri-docker.service           │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cri-dockerd --version                                    │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo systemctl status containerd --all --full --no-pager      │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo systemctl cat containerd --no-pager                      │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ image   │ default-k8s-diff-port-178152 image list --format=json                        │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo cat /lib/systemd/system/containerd.service               │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ pause   │ -p default-k8s-diff-port-178152 --alsologtostderr -v=1                       │ default-k8s-diff-port-178152 │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ ssh     │ -p auto-825702 sudo cat /etc/containerd/config.toml                          │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo containerd config dump                                   │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo systemctl status crio --all --full --no-pager            │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo systemctl cat crio --no-pager                            │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ ssh     │ -p auto-825702 sudo crio config                                              │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:24 UTC │
	│ delete  │ -p auto-825702                                                               │ auto-825702                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:23:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:23:47.189848  301922 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:23:47.189957  301922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:47.189965  301922 out.go:374] Setting ErrFile to fd 2...
	I1126 20:23:47.189971  301922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:47.190239  301922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:23:47.190723  301922 out.go:368] Setting JSON to false
	I1126 20:23:47.192077  301922 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3977,"bootTime":1764184650,"procs":444,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:23:47.192134  301922 start.go:143] virtualization: kvm guest
	I1126 20:23:47.194561  301922 out.go:179] * [calico-825702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:23:47.195724  301922 notify.go:221] Checking for updates...
	I1126 20:23:47.195772  301922 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:23:47.197117  301922 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:23:47.198665  301922 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:23:47.199967  301922 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:23:47.201285  301922 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:23:47.202564  301922 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:23:47.204419  301922 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:47.204573  301922 config.go:182] Loaded profile config "default-k8s-diff-port-178152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:47.204689  301922 config.go:182] Loaded profile config "kindnet-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:47.204808  301922 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:23:47.229902  301922 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:23:47.229971  301922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:47.287928  301922 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:23:47.277739401 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:47.288076  301922 docker.go:319] overlay module found
	I1126 20:23:47.289832  301922 out.go:179] * Using the docker driver based on user configuration
	I1126 20:23:47.291025  301922 start.go:309] selected driver: docker
	I1126 20:23:47.291039  301922 start.go:927] validating driver "docker" against <nil>
	I1126 20:23:47.291052  301922 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:23:47.291780  301922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:23:47.352797  301922 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-26 20:23:47.342766053 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:23:47.352951  301922 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:23:47.353144  301922 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:47.354992  301922 out.go:179] * Using Docker driver with root privileges
	I1126 20:23:47.356169  301922 cni.go:84] Creating CNI manager for "calico"
	I1126 20:23:47.356187  301922 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1126 20:23:47.356249  301922 start.go:353] cluster config:
	{Name:calico-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:47.357525  301922 out.go:179] * Starting "calico-825702" primary control-plane node in "calico-825702" cluster
	I1126 20:23:47.358573  301922 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:23:47.359707  301922 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:23:47.360761  301922 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:47.360792  301922 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:23:47.360800  301922 cache.go:65] Caching tarball of preloaded images
	I1126 20:23:47.360880  301922 preload.go:238] Found /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:23:47.360893  301922 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:23:47.360888  301922 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:23:47.360993  301922 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/config.json ...
	I1126 20:23:47.361017  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/config.json: {Name:mkb261286e1f4d4d01af83fdf3add0b686de212e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:47.381692  301922 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:23:47.381711  301922 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:23:47.381724  301922 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:23:47.381746  301922 start.go:360] acquireMachinesLock for calico-825702: {Name:mk2d555972f6c5e77ea8f2b60bfc246817d537f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:23:47.381838  301922 start.go:364] duration metric: took 78.405µs to acquireMachinesLock for "calico-825702"
	I1126 20:23:47.381860  301922 start.go:93] Provisioning new machine with config: &{Name:calico-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:23:47.381919  301922 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:23:44.508679  290654 node_ready.go:49] node "auto-825702" is "Ready"
	I1126 20:23:44.508755  290654 node_ready.go:38] duration metric: took 11.028306238s for node "auto-825702" to be "Ready" ...
	I1126 20:23:44.508854  290654 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:23:44.508981  290654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:23:44.522641  290654 api_server.go:72] duration metric: took 11.332337138s to wait for apiserver process to appear ...
	I1126 20:23:44.522667  290654 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:23:44.522687  290654 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1126 20:23:44.526748  290654 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1126 20:23:44.527674  290654 api_server.go:141] control plane version: v1.34.1
	I1126 20:23:44.527699  290654 api_server.go:131] duration metric: took 5.024515ms to wait for apiserver health ...
	I1126 20:23:44.527711  290654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:23:44.697892  290654 system_pods.go:59] 8 kube-system pods found
	I1126 20:23:44.697928  290654 system_pods.go:61] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending
	I1126 20:23:44.697935  290654 system_pods.go:61] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:44.697940  290654 system_pods.go:61] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:44.697945  290654 system_pods.go:61] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:44.697951  290654 system_pods.go:61] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:44.697955  290654 system_pods.go:61] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:44.697960  290654 system_pods.go:61] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:44.697964  290654 system_pods.go:61] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending
	I1126 20:23:44.697972  290654 system_pods.go:74] duration metric: took 170.250183ms to wait for pod list to return data ...
	I1126 20:23:44.697980  290654 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:23:44.830743  290654 default_sa.go:45] found service account: "default"
	I1126 20:23:44.830774  290654 default_sa.go:55] duration metric: took 132.786598ms for default service account to be created ...
	I1126 20:23:44.830787  290654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:23:44.833946  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:44.833980  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:44.833986  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:44.833993  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:44.833998  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:44.834003  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:44.834008  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:44.834013  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:44.834018  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending
	I1126 20:23:44.834040  290654 retry.go:31] will retry after 225.89413ms: missing components: kube-dns
	I1126 20:23:45.063418  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:45.063449  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:45.063484  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:45.063492  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:45.063498  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:45.063509  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:45.063518  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:45.063524  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:45.063540  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:45.063559  290654 retry.go:31] will retry after 262.27459ms: missing components: kube-dns
	I1126 20:23:45.387844  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:45.387909  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:45.387923  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:45.387931  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:45.387937  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:45.387942  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:45.387951  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:45.387956  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:45.387964  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:45.387980  290654 retry.go:31] will retry after 405.718495ms: missing components: kube-dns
	I1126 20:23:45.797836  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:45.797879  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:45.797889  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:45.797895  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:45.797901  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:45.797907  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:45.797915  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:45.797922  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:45.797929  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:45.797946  290654 retry.go:31] will retry after 514.179543ms: missing components: kube-dns
	I1126 20:23:46.316516  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:46.316551  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:23:46.316559  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:46.316570  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:46.316579  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:46.316588  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:46.316594  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:46.316600  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:46.316610  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:23:46.316630  290654 retry.go:31] will retry after 505.999214ms: missing components: kube-dns
	I1126 20:23:46.827423  290654 system_pods.go:86] 8 kube-system pods found
	I1126 20:23:46.827452  290654 system_pods.go:89] "coredns-66bc5c9577-8lbn9" [9ede6da3-5188-4b17-ad41-9add6a1cc659] Running
	I1126 20:23:46.827486  290654 system_pods.go:89] "etcd-auto-825702" [5d7b22d0-d28f-4975-a668-0b20f9356f08] Running
	I1126 20:23:46.827498  290654 system_pods.go:89] "kindnet-s68gz" [fd077140-84bb-48cf-9f71-834b22cde6da] Running
	I1126 20:23:46.827504  290654 system_pods.go:89] "kube-apiserver-auto-825702" [61cfca9f-0130-497e-b843-520e347eac4f] Running
	I1126 20:23:46.827510  290654 system_pods.go:89] "kube-controller-manager-auto-825702" [c0f893ee-23e9-40ed-9718-2404eff503b8] Running
	I1126 20:23:46.827516  290654 system_pods.go:89] "kube-proxy-zj978" [c564e4fc-2759-4b29-ad81-85dcd85e1f5d] Running
	I1126 20:23:46.827523  290654 system_pods.go:89] "kube-scheduler-auto-825702" [d708478b-9df4-4e77-ad8e-4c9d748a297a] Running
	I1126 20:23:46.827529  290654 system_pods.go:89] "storage-provisioner" [e6be533e-b1d5-4636-b530-7e8f9cef6c3b] Running
	I1126 20:23:46.827539  290654 system_pods.go:126] duration metric: took 1.996744001s to wait for k8s-apps to be running ...
	I1126 20:23:46.827548  290654 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:23:46.827596  290654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:23:46.842254  290654 system_svc.go:56] duration metric: took 14.697934ms WaitForService to wait for kubelet
	I1126 20:23:46.842281  290654 kubeadm.go:587] duration metric: took 13.651978541s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:23:46.842299  290654 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:23:46.845215  290654 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1126 20:23:46.845239  290654 node_conditions.go:123] node cpu capacity is 8
	I1126 20:23:46.845254  290654 node_conditions.go:105] duration metric: took 2.949322ms to run NodePressure ...
	I1126 20:23:46.845264  290654 start.go:242] waiting for startup goroutines ...
	I1126 20:23:46.845271  290654 start.go:247] waiting for cluster config update ...
	I1126 20:23:46.845279  290654 start.go:256] writing updated cluster config ...
	I1126 20:23:46.845537  290654 ssh_runner.go:195] Run: rm -f paused
	I1126 20:23:46.849594  290654 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:46.853796  290654 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8lbn9" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.858602  290654 pod_ready.go:94] pod "coredns-66bc5c9577-8lbn9" is "Ready"
	I1126 20:23:46.858622  290654 pod_ready.go:86] duration metric: took 4.805106ms for pod "coredns-66bc5c9577-8lbn9" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.860617  290654 pod_ready.go:83] waiting for pod "etcd-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.864506  290654 pod_ready.go:94] pod "etcd-auto-825702" is "Ready"
	I1126 20:23:46.864526  290654 pod_ready.go:86] duration metric: took 3.888732ms for pod "etcd-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.866346  290654 pod_ready.go:83] waiting for pod "kube-apiserver-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.869886  290654 pod_ready.go:94] pod "kube-apiserver-auto-825702" is "Ready"
	I1126 20:23:46.869905  290654 pod_ready.go:86] duration metric: took 3.542192ms for pod "kube-apiserver-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:46.871974  290654 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:47.254559  290654 pod_ready.go:94] pod "kube-controller-manager-auto-825702" is "Ready"
	I1126 20:23:47.254582  290654 pod_ready.go:86] duration metric: took 382.5885ms for pod "kube-controller-manager-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:47.454812  290654 pod_ready.go:83] waiting for pod "kube-proxy-zj978" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:47.854284  290654 pod_ready.go:94] pod "kube-proxy-zj978" is "Ready"
	I1126 20:23:47.854314  290654 pod_ready.go:86] duration metric: took 399.473345ms for pod "kube-proxy-zj978" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:48.054761  290654 pod_ready.go:83] waiting for pod "kube-scheduler-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:48.454164  290654 pod_ready.go:94] pod "kube-scheduler-auto-825702" is "Ready"
	I1126 20:23:48.454189  290654 pod_ready.go:86] duration metric: took 399.401067ms for pod "kube-scheduler-auto-825702" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:23:48.454200  290654 pod_ready.go:40] duration metric: took 1.604575261s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:23:48.503227  290654 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:23:48.505850  290654 out.go:179] * Done! kubectl is now configured to use "auto-825702" cluster and "default" namespace by default
	I1126 20:23:45.793335  299373 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-825702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.54296022s)
	I1126 20:23:45.793367  299373 kic.go:203] duration metric: took 4.543107725s to extract preloaded images to volume ...
	W1126 20:23:45.793472  299373 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:23:45.793515  299373 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:23:45.793561  299373 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:23:45.856182  299373 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-825702 --name kindnet-825702 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-825702 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-825702 --network kindnet-825702 --ip 192.168.76.2 --volume kindnet-825702:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:23:46.203867  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Running}}
	I1126 20:23:46.223678  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:23:46.243533  299373 cli_runner.go:164] Run: docker exec kindnet-825702 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:23:46.315187  299373 oci.go:144] the created container "kindnet-825702" has a running status.
	I1126 20:23:46.315216  299373 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa...
	I1126 20:23:46.351483  299373 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:23:46.729438  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:23:46.749322  299373 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:23:46.749342  299373 kic_runner.go:114] Args: [docker exec --privileged kindnet-825702 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:23:46.798515  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:23:46.817923  299373 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:46.818039  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:46.838583  299373 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:46.838912  299373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1126 20:23:46.838934  299373 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:46.986273  299373 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-825702
	
	I1126 20:23:46.986304  299373 ubuntu.go:182] provisioning hostname "kindnet-825702"
	I1126 20:23:46.986353  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:47.006139  299373 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:47.006352  299373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1126 20:23:47.006367  299373 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-825702 && echo "kindnet-825702" | sudo tee /etc/hostname
	I1126 20:23:47.158629  299373 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-825702
	
	I1126 20:23:47.158691  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:47.177826  299373 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:47.178121  299373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1126 20:23:47.178143  299373 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-825702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-825702/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-825702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:47.322594  299373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:47.322622  299373 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:47.322667  299373 ubuntu.go:190] setting up certificates
	I1126 20:23:47.322680  299373 provision.go:84] configureAuth start
	I1126 20:23:47.322728  299373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-825702
	I1126 20:23:47.342348  299373 provision.go:143] copyHostCerts
	I1126 20:23:47.342417  299373 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:47.342432  299373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:47.342542  299373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:47.342656  299373 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:47.342670  299373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:47.342710  299373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:47.342774  299373 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:47.342790  299373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:47.342827  299373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:47.342900  299373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.kindnet-825702 san=[127.0.0.1 192.168.76.2 kindnet-825702 localhost minikube]
	I1126 20:23:47.508300  299373 provision.go:177] copyRemoteCerts
	I1126 20:23:47.508346  299373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:47.508380  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:47.527269  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:47.628450  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1126 20:23:47.652806  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:23:47.675009  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:47.693395  299373 provision.go:87] duration metric: took 370.702667ms to configureAuth
	I1126 20:23:47.693421  299373 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:47.693603  299373 config.go:182] Loaded profile config "kindnet-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:47.693712  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:47.715012  299373 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:47.715304  299373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1126 20:23:47.715327  299373 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:48.029803  299373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:48.029824  299373 machine.go:97] duration metric: took 1.211848447s to provisionDockerMachine
	I1126 20:23:48.029838  299373 client.go:176] duration metric: took 7.356770582s to LocalClient.Create
	I1126 20:23:48.029849  299373 start.go:167] duration metric: took 7.35683624s to libmachine.API.Create "kindnet-825702"
	I1126 20:23:48.029855  299373 start.go:293] postStartSetup for "kindnet-825702" (driver="docker")
	I1126 20:23:48.029864  299373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:48.029912  299373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:48.029949  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:48.052127  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:48.157496  299373 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:48.161331  299373 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:48.161355  299373 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:48.161367  299373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:48.161438  299373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:48.161544  299373 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:48.161673  299373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:48.168981  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:48.188573  299373 start.go:296] duration metric: took 158.707835ms for postStartSetup
	I1126 20:23:48.188945  299373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-825702
	I1126 20:23:48.210197  299373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/config.json ...
	I1126 20:23:48.210417  299373 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:48.210479  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:48.229744  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:48.328760  299373 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:48.333488  299373 start.go:128] duration metric: took 7.662588038s to createHost
	I1126 20:23:48.333515  299373 start.go:83] releasing machines lock for "kindnet-825702", held for 7.662752378s
	I1126 20:23:48.333591  299373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-825702
	I1126 20:23:48.352907  299373 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:48.352958  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:48.352989  299373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:48.353065  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:23:48.371914  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:48.374515  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:23:48.548180  299373 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:48.556577  299373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:48.596317  299373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:48.602266  299373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:48.602363  299373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:48.632866  299373 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:23:48.632893  299373 start.go:496] detecting cgroup driver to use...
	I1126 20:23:48.632925  299373 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:48.632970  299373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:48.653586  299373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:48.669406  299373 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:48.669588  299373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:48.692169  299373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:48.715327  299373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:48.820563  299373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:48.920490  299373 docker.go:234] disabling docker service ...
	I1126 20:23:48.920556  299373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:48.943723  299373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:48.957969  299373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:49.082396  299373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:49.176356  299373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:49.190094  299373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:49.207004  299373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:49.207073  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.217706  299373 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:49.217763  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.226617  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.235292  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.243823  299373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:49.251931  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.260429  299373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.274135  299373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:49.283794  299373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:49.291047  299373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:49.298102  299373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:49.385437  299373 ssh_runner.go:195] Run: sudo systemctl restart crio
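The sequence of `sed` edits above rewrites the pause image, cgroup manager, and conmon cgroup in cri-o's drop-in config before the `systemctl restart crio`. A minimal sketch of those edits, run against a temp copy with assumed sample contents rather than the real `/etc/crio/crio.conf.d/02-crio.conf`:

```shell
# Sketch: replay the cri-o config edits from the log against a temp file.
# The starting contents below are assumed for illustration.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
EOF
# Same sed expressions as the ssh_runner commands above:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
result=$(cat "$conf")
rm -f "$conf"
echo "$result"
```

Note the ordering: the stale `conmon_cgroup` line is deleted first, then a fresh `conmon_cgroup = "pod"` is appended after the (now systemd) `cgroup_manager` line, matching the order of the four `sh -c` commands in the log.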
	W1126 20:23:48.200948  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:50.201099  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:51.787724  299373 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.401986212s)
	I1126 20:23:51.787765  299373 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:51.787822  299373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:51.794485  299373 start.go:564] Will wait 60s for crictl version
	I1126 20:23:51.794545  299373 ssh_runner.go:195] Run: which crictl
	I1126 20:23:51.800840  299373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:51.839310  299373 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:51.839418  299373 ssh_runner.go:195] Run: crio --version
	I1126 20:23:51.880720  299373 ssh_runner.go:195] Run: crio --version
	I1126 20:23:51.927354  299373 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:23:47.384542  301922 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:23:47.384764  301922 start.go:159] libmachine.API.Create for "calico-825702" (driver="docker")
	I1126 20:23:47.384819  301922 client.go:173] LocalClient.Create starting
	I1126 20:23:47.384897  301922 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem
	I1126 20:23:47.384930  301922 main.go:143] libmachine: Decoding PEM data...
	I1126 20:23:47.384948  301922 main.go:143] libmachine: Parsing certificate...
	I1126 20:23:47.385006  301922 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem
	I1126 20:23:47.385026  301922 main.go:143] libmachine: Decoding PEM data...
	I1126 20:23:47.385037  301922 main.go:143] libmachine: Parsing certificate...
	I1126 20:23:47.385331  301922 cli_runner.go:164] Run: docker network inspect calico-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:23:47.402007  301922 cli_runner.go:211] docker network inspect calico-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:23:47.402067  301922 network_create.go:284] running [docker network inspect calico-825702] to gather additional debugging logs...
	I1126 20:23:47.402089  301922 cli_runner.go:164] Run: docker network inspect calico-825702
	W1126 20:23:47.418384  301922 cli_runner.go:211] docker network inspect calico-825702 returned with exit code 1
	I1126 20:23:47.418403  301922 network_create.go:287] error running [docker network inspect calico-825702]: docker network inspect calico-825702: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-825702 not found
	I1126 20:23:47.418424  301922 network_create.go:289] output of [docker network inspect calico-825702]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-825702 not found
	
	** /stderr **
	I1126 20:23:47.418562  301922 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:47.437835  301922 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
	I1126 20:23:47.438546  301922 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bbc6aaddce08 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:93:ea:3f:0d:7e} reservation:<nil>}
	I1126 20:23:47.439254  301922 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ecd673f5b2f2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:de:d0:57:1a:06:91} reservation:<nil>}
	I1126 20:23:47.440093  301922 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bbb3cbf3682c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:9a:00:30:1a:2a:ff} reservation:<nil>}
	I1126 20:23:47.440717  301922 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ec68256d4118 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:72:5d:f9:71:de:9b} reservation:<nil>}
	I1126 20:23:47.441770  301922 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f882f0}
	I1126 20:23:47.441805  301922 network_create.go:124] attempt to create docker network calico-825702 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1126 20:23:47.441857  301922 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-825702 calico-825702
	I1126 20:23:47.490192  301922 network_create.go:108] docker network calico-825702 192.168.94.0/24 created
	I1126 20:23:47.490227  301922 kic.go:121] calculated static IP "192.168.94.2" for the "calico-825702" container
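The subnet scan above skips 192.168.49.0/24 through 192.168.85.0/24 and settles on 192.168.94.0/24; each candidate advances the third octet by 9. A sketch of that selection loop, with the taken set hardcoded from this log:

```shell
# Taken subnets as reported by network.go above; candidates step the third
# octet by 9 per attempt (49, 58, 67, 76, 85, 94, ...).
taken="192.168.49.0/24 192.168.58.0/24 192.168.67.0/24 192.168.76.0/24 192.168.85.0/24"
octet=49
free=""
while [ -z "$free" ] && [ "$octet" -le 255 ]; do
  candidate="192.168.$octet.0/24"
  case " $taken " in
    *" $candidate "*) octet=$((octet + 9)) ;;   # taken: try the next block
    *) free="$candidate" ;;                     # first unused candidate wins
  esac
done
echo "$free"
```

With the five taken subnets above, the loop lands on 192.168.94.0/24, matching the `using free private subnet` line in the log.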
	I1126 20:23:47.490292  301922 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:23:47.510542  301922 cli_runner.go:164] Run: docker volume create calico-825702 --label name.minikube.sigs.k8s.io=calico-825702 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:23:47.529620  301922 oci.go:103] Successfully created a docker volume calico-825702
	I1126 20:23:47.529730  301922 cli_runner.go:164] Run: docker run --rm --name calico-825702-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-825702 --entrypoint /usr/bin/test -v calico-825702:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:23:47.919043  301922 oci.go:107] Successfully prepared a docker volume calico-825702
	I1126 20:23:47.919105  301922 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:47.919118  301922 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:23:47.919188  301922 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-825702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:23:51.718046  301922 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-825702:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.798811549s)
	I1126 20:23:51.718073  301922 kic.go:203] duration metric: took 3.79895426s to extract preloaded images to volume ...
	W1126 20:23:51.718159  301922 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1126 20:23:51.718205  301922 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1126 20:23:51.718250  301922 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:23:51.803062  301922 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-825702 --name calico-825702 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-825702 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-825702 --network calico-825702 --ip 192.168.94.2 --volume calico-825702:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:23:51.928816  299373 cli_runner.go:164] Run: docker network inspect kindnet-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:51.953067  299373 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:51.959158  299373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:51.972160  299373 kubeadm.go:884] updating cluster {Name:kindnet-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
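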
	I1126 20:23:51.972304  299373 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:51.972357  299373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:52.026301  299373 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:52.026330  299373 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:52.026384  299373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:52.064356  299373 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:52.064433  299373 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:52.064478  299373 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1126 20:23:52.064645  299373 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-825702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1126 20:23:52.064865  299373 ssh_runner.go:195] Run: crio config
	I1126 20:23:52.143227  299373 cni.go:84] Creating CNI manager for "kindnet"
	I1126 20:23:52.143261  299373 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:52.143292  299373 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-825702 NodeName:kindnet-825702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:52.143453  299373 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-825702"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
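One quick consistency check implied by the kubeadm config above: `podSubnet` (10.244.0.0/16) and `serviceSubnet` (10.96.0.0/12) must be disjoint. A sketch of that check using shell integer arithmetic (range comparison on the subnet base addresses and sizes from the config):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Values taken from the kubeadm config printed above.
pod_base=$(ip2int 10.244.0.0); pod_size=$(( 1 << (32 - 16) ))   # /16
svc_base=$(ip2int 10.96.0.0);  svc_size=$(( 1 << (32 - 12) ))   # /12

# Two ranges are disjoint iff one ends at or before the other begins.
if [ $(( pod_base + pod_size )) -le "$svc_base" ] || \
   [ $(( svc_base + svc_size )) -le "$pod_base" ]; then
  overlap=no
else
  overlap=yes
fi
echo "overlap=$overlap"
```

For these values the ranges are disjoint (10.96.0.0/12 ends at 10.111.255.255, well below 10.244.0.0), so kubeadm's preflight passes this pairing.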
	I1126 20:23:52.143551  299373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:52.154670  299373 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:52.154742  299373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:52.164507  299373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1126 20:23:52.183112  299373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:52.204293  299373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1126 20:23:52.221216  299373 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:52.225616  299373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
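The `/etc/hosts` update above is idempotent: it strips any existing `control-plane.minikube.internal` entry before appending the current one. A sketch of the same bash pipeline, run against a temp file with assumed contents instead of the real `/etc/hosts`:

```shell
# Replay the idempotent hosts-file edit from the log against a temp copy;
# the starting contents (including a stale entry) are assumed.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.76.2\tcontrol-plane.minikube.internal\n' > "$hosts"
# Same grep -v / echo pipeline as the ssh_runner command above:
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'192.168.76.2\tcontrol-plane.minikube.internal'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
result=$(grep -c 'control-plane.minikube.internal' "$hosts")
rm -f "$hosts"
echo "entries=$result"
```

Even with a pre-existing entry, exactly one `control-plane.minikube.internal` line remains afterward, which is why the command is safe to re-run on every start.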
	I1126 20:23:52.237744  299373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:52.347578  299373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:52.372296  299373 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702 for IP: 192.168.76.2
	I1126 20:23:52.372317  299373 certs.go:195] generating shared ca certs ...
	I1126 20:23:52.372337  299373 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.372557  299373 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:52.372629  299373 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:52.372644  299373 certs.go:257] generating profile certs ...
	I1126 20:23:52.372723  299373 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.key
	I1126 20:23:52.372738  299373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.crt with IP's: []
	I1126 20:23:52.451226  299373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.crt ...
	I1126 20:23:52.451260  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.crt: {Name:mke953dd7968e23857340a97386719eb22be1c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.451443  299373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.key ...
	I1126 20:23:52.451473  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/client.key: {Name:mk2677457717c3733bee89a1d00ffb348a73cf4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.451607  299373 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key.f4b9e689
	I1126 20:23:52.451632  299373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt.f4b9e689 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1126 20:23:52.659651  299373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt.f4b9e689 ...
	I1126 20:23:52.659675  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt.f4b9e689: {Name:mk4ab2cf14f50dedda220d9db59a04a09297a2df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.659851  299373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key.f4b9e689 ...
	I1126 20:23:52.659869  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key.f4b9e689: {Name:mk0378cd86c079f093c2f36200397fef79275ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.659967  299373 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt.f4b9e689 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt
	I1126 20:23:52.660058  299373 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key.f4b9e689 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key
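The apiserver cert above is generated with SANs `[10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]`; the first entry is the first address of the service CIDR (10.96.0.0/12). A sketch of that derivation (valid here because the CIDR base ends in `.0`; the remaining SANs are copied verbatim from the log line):

```shell
# First usable service IP for the apiserver cert SAN list: base of the
# service CIDR plus one. Assumes the base address's last octet is 0,
# as it is for 10.96.0.0/12 in this log.
cidr=10.96.0.0/12
base=${cidr%/*}
IFS=. read -r a b c d <<EOF
$base
EOF
first_svc_ip="$a.$b.$c.$((d + 1))"
# Remaining entries copied from the crypto.go line above.
sans="$first_svc_ip 127.0.0.1 10.0.0.1 192.168.76.2"
echo "$sans"
```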
	I1126 20:23:52.660131  299373 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.key
	I1126 20:23:52.660153  299373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.crt with IP's: []
	I1126 20:23:52.703185  299373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.crt ...
	I1126 20:23:52.703213  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.crt: {Name:mkf29caa948722fe21b05adfd9a6900914e9f54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.703368  299373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.key ...
	I1126 20:23:52.703382  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.key: {Name:mkbbd595893f48f215568b4c50887a57446454b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:52.703651  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:52.703710  299373 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:52.703727  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:52.703768  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:52.703806  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:52.703843  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:52.703915  299373 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:52.704614  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:52.726616  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:52.749402  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:52.773210  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:52.796723  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 20:23:52.817151  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:52.839422  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:52.862609  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kindnet-825702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:23:52.884097  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:52.909769  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:52.935154  299373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:52.956212  299373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:52.973806  299373 ssh_runner.go:195] Run: openssl version
	I1126 20:23:52.981751  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:52.992940  299373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:52.998818  299373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:52.998873  299373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:53.046862  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:23:53.056085  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:53.065426  299373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:53.069888  299373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:53.069942  299373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:53.113677  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:23:53.124310  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:53.134994  299373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:53.139085  299373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:53.139153  299373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:53.186819  299373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
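	The `openssl x509 -hash` / `ln -fs` pairs above install each CA into the OpenSSL trust store, which resolves certificates by subject-hash symlinks (e.g. `3ec20f2e.0`). A minimal local sketch of that step, using a throwaway self-signed CA in a scratch directory rather than the real `/etc/ssl/certs` (the `demoCA` name and all paths are illustrative, not from the log):

	```shell
	# Scratch directory standing in for /etc/ssl/certs.
	certdir=$(mktemp -d)

	# Generate a throwaway self-signed CA cert (illustrative only).
	openssl req -x509 -newkey rsa:2048 -nodes -keyout "$certdir/ca.key" \
	  -out "$certdir/ca.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null

	# Compute the subject hash OpenSSL uses for lookup, as the log does.
	hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")

	# Link the cert under <hash>.0, mirroring the log's "ln -fs" commands.
	ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
	ls -l "$certdir/$hash.0"
	```

	OpenSSL tries `<hash>.0`, `<hash>.1`, … on lookup, which is why the log first tests `test -L /etc/ssl/certs/<hash>.0` before linking.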
	I1126 20:23:53.197687  299373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:53.202789  299373 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:23:53.202844  299373 kubeadm.go:401] StartCluster: {Name:kindnet-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:53.202948  299373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:53.203003  299373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:53.237609  299373 cri.go:89] found id: ""
	I1126 20:23:53.237681  299373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:53.247899  299373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:23:53.258375  299373 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:23:53.258426  299373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:23:53.268115  299373 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:23:53.268135  299373 kubeadm.go:158] found existing configuration files:
	
	I1126 20:23:53.268185  299373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:23:53.277837  299373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:23:53.277889  299373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:23:53.287374  299373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:23:53.296497  299373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:23:53.296543  299373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:23:53.305110  299373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:23:53.314492  299373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:23:53.314541  299373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:23:53.323438  299373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:23:53.332867  299373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:23:53.332912  299373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:23:53.342352  299373 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:23:53.407775  299373 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 20:23:53.472376  299373 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:23:52.232689  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Running}}
	I1126 20:23:52.254397  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:23:52.279673  301922 cli_runner.go:164] Run: docker exec calico-825702 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:23:52.346298  301922 oci.go:144] the created container "calico-825702" has a running status.
	I1126 20:23:52.346330  301922 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa...
	I1126 20:23:52.845596  301922 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:23:52.879697  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:23:52.903068  301922 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:23:52.903102  301922 kic_runner.go:114] Args: [docker exec --privileged calico-825702 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:23:52.960815  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:23:52.983354  301922 machine.go:94] provisionDockerMachine start ...
	I1126 20:23:52.983442  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.007190  301922 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:53.007523  301922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1126 20:23:53.007544  301922 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:23:53.152041  301922 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-825702
	
	I1126 20:23:53.152070  301922 ubuntu.go:182] provisioning hostname "calico-825702"
	I1126 20:23:53.152128  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.175500  301922 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:53.175790  301922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1126 20:23:53.175814  301922 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-825702 && echo "calico-825702" | sudo tee /etc/hostname
	I1126 20:23:53.344912  301922 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-825702
	
	I1126 20:23:53.344997  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.366624  301922 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:53.366927  301922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1126 20:23:53.366955  301922 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-825702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-825702/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-825702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:23:53.515999  301922 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:23:53.516036  301922 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-10722/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-10722/.minikube}
	I1126 20:23:53.516083  301922 ubuntu.go:190] setting up certificates
	I1126 20:23:53.516098  301922 provision.go:84] configureAuth start
	I1126 20:23:53.516160  301922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-825702
	I1126 20:23:53.534836  301922 provision.go:143] copyHostCerts
	I1126 20:23:53.534900  301922 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem, removing ...
	I1126 20:23:53.534908  301922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem
	I1126 20:23:53.534988  301922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/cert.pem (1123 bytes)
	I1126 20:23:53.535091  301922 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem, removing ...
	I1126 20:23:53.535102  301922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem
	I1126 20:23:53.535138  301922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/key.pem (1675 bytes)
	I1126 20:23:53.535209  301922 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem, removing ...
	I1126 20:23:53.535218  301922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem
	I1126 20:23:53.535248  301922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-10722/.minikube/ca.pem (1078 bytes)
	I1126 20:23:53.535315  301922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem org=jenkins.calico-825702 san=[127.0.0.1 192.168.94.2 calico-825702 localhost minikube]
	I1126 20:23:53.591294  301922 provision.go:177] copyRemoteCerts
	I1126 20:23:53.591360  301922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:23:53.591434  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.608811  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:53.707057  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:23:53.725312  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:23:53.741919  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 20:23:53.758223  301922 provision.go:87] duration metric: took 242.112645ms to configureAuth
	I1126 20:23:53.758245  301922 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:23:53.758385  301922 config.go:182] Loaded profile config "calico-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:53.758550  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:53.776347  301922 main.go:143] libmachine: Using SSH client type: native
	I1126 20:23:53.776605  301922 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1126 20:23:53.776629  301922 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:23:54.050892  301922 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:23:54.050915  301922 machine.go:97] duration metric: took 1.067537807s to provisionDockerMachine
	I1126 20:23:54.050927  301922 client.go:176] duration metric: took 6.666100233s to LocalClient.Create
	I1126 20:23:54.050948  301922 start.go:167] duration metric: took 6.66618385s to libmachine.API.Create "calico-825702"
	I1126 20:23:54.050959  301922 start.go:293] postStartSetup for "calico-825702" (driver="docker")
	I1126 20:23:54.050974  301922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:23:54.051055  301922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:23:54.051102  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:54.069703  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:54.170490  301922 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:23:54.173899  301922 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:23:54.173923  301922 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:23:54.173934  301922 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/addons for local assets ...
	I1126 20:23:54.173990  301922 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-10722/.minikube/files for local assets ...
	I1126 20:23:54.174079  301922 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem -> 142582.pem in /etc/ssl/certs
	I1126 20:23:54.174193  301922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:23:54.181770  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:54.203567  301922 start.go:296] duration metric: took 152.593375ms for postStartSetup
	I1126 20:23:54.203966  301922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-825702
	I1126 20:23:54.227877  301922 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/config.json ...
	I1126 20:23:54.228113  301922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:23:54.228153  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:54.246033  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:54.342660  301922 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:23:54.347188  301922 start.go:128] duration metric: took 6.965258038s to createHost
	I1126 20:23:54.347209  301922 start.go:83] releasing machines lock for "calico-825702", held for 6.965359491s
	I1126 20:23:54.347278  301922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-825702
	I1126 20:23:54.365495  301922 ssh_runner.go:195] Run: cat /version.json
	I1126 20:23:54.365535  301922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:23:54.365552  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:54.365614  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:23:54.384799  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:54.385666  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:23:54.550418  301922 ssh_runner.go:195] Run: systemctl --version
	I1126 20:23:54.557448  301922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:23:54.590674  301922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:23:54.594999  301922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:23:54.595057  301922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:23:54.619364  301922 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:23:54.619381  301922 start.go:496] detecting cgroup driver to use...
	I1126 20:23:54.619405  301922 detect.go:190] detected "systemd" cgroup driver on host os
	I1126 20:23:54.619441  301922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:23:54.634668  301922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:23:54.646269  301922 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:23:54.646312  301922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:23:54.661594  301922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:23:54.682857  301922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:23:54.773728  301922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:23:54.870129  301922 docker.go:234] disabling docker service ...
	I1126 20:23:54.870190  301922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:23:54.887501  301922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:23:54.899403  301922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:23:54.985503  301922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:23:55.066410  301922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:23:55.078398  301922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:23:55.092208  301922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:23:55.092272  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.104946  301922 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1126 20:23:55.105002  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.113316  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.121861  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.130432  301922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:23:55.137951  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.145917  301922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:23:55.159553  301922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
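	The sed sequence above rewrites the CRI-O drop-in in place: force `cgroup_manager` to `systemd`, delete any stale `conmon_cgroup` line, then re-append it after `cgroup_manager`. A sketch of the same idempotent pattern against a scratch copy of the drop-in (the file contents here are illustrative, not the node's actual `02-crio.conf`):

	```shell
	# Scratch stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	conf=$(mktemp)
	cat > "$conf" <<'EOF'
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "system.slice"
	EOF

	# Force the cgroup manager to systemd, whatever the prior value was.
	sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
	# Drop any existing conmon_cgroup line...
	sed -i '/conmon_cgroup = .*/d' "$conf"
	# ...then re-add it directly after the cgroup_manager line (GNU sed "a").
	sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

	grep cgroup "$conf"
	```

	Because each edit matches any previous value, re-running the sequence leaves the file unchanged, which is what lets minikube apply it unconditionally on every start.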
	I1126 20:23:55.167554  301922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:23:55.174353  301922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:23:55.181028  301922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:55.257553  301922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:23:55.564077  301922 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:23:55.564154  301922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:23:55.568154  301922 start.go:564] Will wait 60s for crictl version
	I1126 20:23:55.568207  301922 ssh_runner.go:195] Run: which crictl
	I1126 20:23:55.571719  301922 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:23:55.595758  301922 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:23:55.595834  301922 ssh_runner.go:195] Run: crio --version
	I1126 20:23:55.622499  301922 ssh_runner.go:195] Run: crio --version
	I1126 20:23:55.651199  301922 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1126 20:23:52.201745  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:54.206831  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:23:55.652327  301922 cli_runner.go:164] Run: docker network inspect calico-825702 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:23:55.671350  301922 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1126 20:23:55.675285  301922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
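The /etc/hosts update above uses a filter-and-append idiom: strip any existing `host.minikube.internal` line with `grep -v`, append the fresh mapping, write to a temp file, then `sudo cp` the result into place (cp rather than mv so the unprivileged shell can do the redirection and the destination keeps its inode). A sketch against a scratch file, with an illustrative stale entry:

```shell
# Scratch stand-in for /etc/hosts, seeded with a stale mapping.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n' > "$hosts"

# Drop the old entry (match: tab + name at end of line), append the
# current one, and copy the rewritten file back over the original.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.94.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"

cat "$hosts"
```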
	I1126 20:23:55.685531  301922 kubeadm.go:884] updating cluster {Name:calico-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:23:55.685648  301922 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:23:55.685703  301922 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:55.718537  301922 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:55.718557  301922 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:23:55.718601  301922 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:23:55.743609  301922 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:23:55.743627  301922 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:23:55.743636  301922 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1126 20:23:55.743732  301922 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-825702 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1126 20:23:55.743808  301922 ssh_runner.go:195] Run: crio config
	I1126 20:23:55.792565  301922 cni.go:84] Creating CNI manager for "calico"
	I1126 20:23:55.792593  301922 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:23:55.792612  301922 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-825702 NodeName:calico-825702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:23:55.792751  301922 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-825702"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:23:55.792832  301922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:23:55.800817  301922 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:23:55.800879  301922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:23:55.808376  301922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 20:23:55.820303  301922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:23:55.834946  301922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
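The kubeadm.yaml rendered above bundles four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one multi-document YAML file. A quick structural sanity check, a sketch and not minikube code, can count the documents before `kubeadm init` consumes the file; the heredoc below is an abridged stand-in for the real 2209-byte file:

```shell
# Abridged stand-in for /var/tmp/minikube/kubeadm.yaml.
f=$(mktemp)
cat > "$f" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# Each YAML document carries exactly one top-level kind.
grep -c '^kind:' "$f"
```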
	I1126 20:23:55.846922  301922 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:23:55.850109  301922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:23:55.859786  301922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:23:55.940056  301922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:23:55.974712  301922 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702 for IP: 192.168.94.2
	I1126 20:23:55.974732  301922 certs.go:195] generating shared ca certs ...
	I1126 20:23:55.974765  301922 certs.go:227] acquiring lock for ca certs: {Name:mk4795cc985d88cb969cdd6a9c35d3c72f02dfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:55.974929  301922 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key
	I1126 20:23:55.974979  301922 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key
	I1126 20:23:55.974992  301922 certs.go:257] generating profile certs ...
	I1126 20:23:55.975043  301922 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.key
	I1126 20:23:55.975056  301922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.crt with IP's: []
	I1126 20:23:56.061237  301922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.crt ...
	I1126 20:23:56.061263  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.crt: {Name:mkff436b36917f3276c4d326ee3c93b943c50217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.061470  301922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.key ...
	I1126 20:23:56.061499  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/client.key: {Name:mke9be3470e889891f1d211221ee9570ae55f9b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.061632  301922 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key.331a6116
	I1126 20:23:56.061651  301922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt.331a6116 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1126 20:23:56.182855  301922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt.331a6116 ...
	I1126 20:23:56.182879  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt.331a6116: {Name:mk77e9a94b309aa554928316f36f0d1850b38498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.183045  301922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key.331a6116 ...
	I1126 20:23:56.183062  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key.331a6116: {Name:mkcafc5699394c4503433eea215ea165fccba0b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.183169  301922 certs.go:382] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt.331a6116 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt
	I1126 20:23:56.183243  301922 certs.go:386] copying /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key.331a6116 -> /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key
	I1126 20:23:56.183295  301922 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.key
	I1126 20:23:56.183311  301922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.crt with IP's: []
	I1126 20:23:56.244960  301922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.crt ...
	I1126 20:23:56.244983  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.crt: {Name:mkc8d52691e96094e07a154dd1027b02c31d9b71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.245123  301922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.key ...
	I1126 20:23:56.245133  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.key: {Name:mk62cfb98ae6397917e957e1780d72e3effaf42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:23:56.245294  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem (1338 bytes)
	W1126 20:23:56.245329  301922 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258_empty.pem, impossibly tiny 0 bytes
	I1126 20:23:56.245341  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca-key.pem (1679 bytes)
	I1126 20:23:56.245364  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:23:56.245389  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:23:56.245411  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/certs/key.pem (1675 bytes)
	I1126 20:23:56.245450  301922 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem (1708 bytes)
	I1126 20:23:56.245993  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:23:56.263488  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:23:56.280301  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:23:56.296600  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:23:56.312923  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 20:23:56.328762  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:23:56.344977  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:23:56.360896  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/calico-825702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:23:56.376706  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/ssl/certs/142582.pem --> /usr/share/ca-certificates/142582.pem (1708 bytes)
	I1126 20:23:56.394705  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:23:56.410740  301922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-10722/.minikube/certs/14258.pem --> /usr/share/ca-certificates/14258.pem (1338 bytes)
	I1126 20:23:56.427214  301922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:23:56.438949  301922 ssh_runner.go:195] Run: openssl version
	I1126 20:23:56.444826  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:23:56.452556  301922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:56.456126  301922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:56.456163  301922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:23:56.489913  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:23:56.498033  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14258.pem && ln -fs /usr/share/ca-certificates/14258.pem /etc/ssl/certs/14258.pem"
	I1126 20:23:56.505965  301922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14258.pem
	I1126 20:23:56.509363  301922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:41 /usr/share/ca-certificates/14258.pem
	I1126 20:23:56.509404  301922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14258.pem
	I1126 20:23:56.544986  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14258.pem /etc/ssl/certs/51391683.0"
	I1126 20:23:56.552961  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142582.pem && ln -fs /usr/share/ca-certificates/142582.pem /etc/ssl/certs/142582.pem"
	I1126 20:23:56.561199  301922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142582.pem
	I1126 20:23:56.564984  301922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:41 /usr/share/ca-certificates/142582.pem
	I1126 20:23:56.565038  301922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142582.pem
	I1126 20:23:56.598902  301922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142582.pem /etc/ssl/certs/3ec20f2e.0"
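The openssl runs above wire up OpenSSL's hashed-directory trust store: each CA in /etc/ssl/certs needs a symlink named `<subject-hash>.0` (e.g. b5213941.0, 51391683.0, 3ec20f2e.0 in the log), where the hash comes from `openssl x509 -hash`. A sketch with a throwaway self-signed CA in a temp directory; the subject name and paths are illustrative:

```shell
# Generate a disposable self-signed CA instead of touching /etc/ssl/certs.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# OpenSSL locates CAs by subject hash, so name the symlink "<hash>.0".
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"

ls -l "$dir/$hash.0"
```

The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value (`.1`, `.2`, … would follow).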
	I1126 20:23:56.606916  301922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:23:56.610139  301922 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:23:56.610192  301922 kubeadm.go:401] StartCluster: {Name:calico-825702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-825702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:23:56.610273  301922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:23:56.610312  301922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:23:56.637376  301922 cri.go:89] found id: ""
	I1126 20:23:56.637441  301922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:23:56.646195  301922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:23:56.653587  301922 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:23:56.653634  301922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:23:56.660892  301922 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:23:56.660909  301922 kubeadm.go:158] found existing configuration files:
	
	I1126 20:23:56.660949  301922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:23:56.668113  301922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:23:56.668156  301922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:23:56.675959  301922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:23:56.683363  301922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:23:56.683409  301922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:23:56.691037  301922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:23:56.699566  301922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:23:56.699618  301922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:23:56.707819  301922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:23:56.716721  301922 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:23:56.716766  301922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:23:56.723756  301922 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:23:56.779698  301922 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1126 20:23:56.838000  301922 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1126 20:23:56.701302  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	W1126 20:23:59.200785  292013 pod_ready.go:104] pod "coredns-66bc5c9577-tpmmm" is not "Ready", error: <nil>
	I1126 20:24:01.200081  292013 pod_ready.go:94] pod "coredns-66bc5c9577-tpmmm" is "Ready"
	I1126 20:24:01.200103  292013 pod_ready.go:86] duration metric: took 39.505126826s for pod "coredns-66bc5c9577-tpmmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.202519  292013 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.206001  292013 pod_ready.go:94] pod "etcd-default-k8s-diff-port-178152" is "Ready"
	I1126 20:24:01.206021  292013 pod_ready.go:86] duration metric: took 3.482053ms for pod "etcd-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.208057  292013 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.211334  292013 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-178152" is "Ready"
	I1126 20:24:01.211354  292013 pod_ready.go:86] duration metric: took 3.278351ms for pod "kube-apiserver-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.212873  292013 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.399534  292013 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-178152" is "Ready"
	I1126 20:24:01.399562  292013 pod_ready.go:86] duration metric: took 186.671342ms for pod "kube-controller-manager-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.600128  292013 pod_ready.go:83] waiting for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:01.999704  292013 pod_ready.go:94] pod "kube-proxy-vd7fp" is "Ready"
	I1126 20:24:01.999736  292013 pod_ready.go:86] duration metric: took 399.560549ms for pod "kube-proxy-vd7fp" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:02.199058  292013 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:02.598494  292013 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-178152" is "Ready"
	I1126 20:24:02.598524  292013 pod_ready.go:86] duration metric: took 399.437453ms for pod "kube-scheduler-default-k8s-diff-port-178152" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:24:02.598549  292013 pod_ready.go:40] duration metric: took 40.90777129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:24:02.652711  292013 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:24:02.654325  292013 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-178152" cluster and "default" namespace by default
	I1126 20:24:05.560744  299373 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:24:05.560802  299373 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:24:05.560909  299373 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:24:05.560966  299373 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:24:05.560996  299373 kubeadm.go:319] OS: Linux
	I1126 20:24:05.561053  299373 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:24:05.561150  299373 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:24:05.561228  299373 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:24:05.561290  299373 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:24:05.561333  299373 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:24:05.561384  299373 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:24:05.561449  299373 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:24:05.561540  299373 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:24:05.561653  299373 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:24:05.561783  299373 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:24:05.561926  299373 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:24:05.562018  299373 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:24:05.563984  299373 out.go:252]   - Generating certificates and keys ...
	I1126 20:24:05.564074  299373 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:24:05.564177  299373 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:24:05.564248  299373 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:24:05.564294  299373 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:24:05.564347  299373 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:24:05.564389  299373 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:24:05.564432  299373 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:24:05.564541  299373 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-825702 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:24:05.564585  299373 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:24:05.564702  299373 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-825702 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:24:05.564805  299373 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:24:05.564921  299373 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:24:05.564988  299373 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:24:05.565063  299373 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:24:05.565145  299373 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:24:05.565216  299373 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:24:05.565289  299373 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:24:05.565397  299373 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:24:05.565443  299373 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:24:05.565585  299373 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:24:05.565682  299373 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 20:24:05.566779  299373 out.go:252]   - Booting up control plane ...
	I1126 20:24:05.566904  299373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:24:05.566981  299373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:24:05.567055  299373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:24:05.567174  299373 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:24:05.567298  299373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:24:05.567476  299373 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:24:05.567593  299373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:24:05.567641  299373 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:24:05.567821  299373 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:24:05.567992  299373 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 20:24:05.568079  299373 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00138774s
	I1126 20:24:05.568195  299373 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:24:05.568309  299373 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1126 20:24:05.568438  299373 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:24:05.568584  299373 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 20:24:05.568656  299373 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.468919721s
	I1126 20:24:05.568714  299373 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.276974193s
	I1126 20:24:05.568780  299373 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001757807s
	I1126 20:24:05.568875  299373 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:24:05.568992  299373 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:24:05.569052  299373 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:24:05.569292  299373 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-825702 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:24:05.569372  299373 kubeadm.go:319] [bootstrap-token] Using token: 8fbp9l.w4uvcyj7kukg5ymm
	I1126 20:24:05.570505  299373 out.go:252]   - Configuring RBAC rules ...
	I1126 20:24:05.570598  299373 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:24:05.570691  299373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:24:05.570870  299373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:24:05.571032  299373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:24:05.571199  299373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:24:05.571285  299373 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:24:05.571387  299373 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:24:05.571432  299373 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:24:05.571485  299373 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:24:05.571491  299373 kubeadm.go:319] 
	I1126 20:24:05.571537  299373 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:24:05.571543  299373 kubeadm.go:319] 
	I1126 20:24:05.571607  299373 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:24:05.571618  299373 kubeadm.go:319] 
	I1126 20:24:05.571642  299373 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:24:05.571704  299373 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:24:05.571751  299373 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:24:05.571757  299373 kubeadm.go:319] 
	I1126 20:24:05.571803  299373 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:24:05.571808  299373 kubeadm.go:319] 
	I1126 20:24:05.571844  299373 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:24:05.571850  299373 kubeadm.go:319] 
	I1126 20:24:05.571907  299373 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:24:05.571977  299373 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:24:05.572070  299373 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:24:05.572088  299373 kubeadm.go:319] 
	I1126 20:24:05.572191  299373 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:24:05.572299  299373 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:24:05.572311  299373 kubeadm.go:319] 
	I1126 20:24:05.572411  299373 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8fbp9l.w4uvcyj7kukg5ymm \
	I1126 20:24:05.572537  299373 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 20:24:05.572565  299373 kubeadm.go:319] 	--control-plane 
	I1126 20:24:05.572572  299373 kubeadm.go:319] 
	I1126 20:24:05.572655  299373 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:24:05.572661  299373 kubeadm.go:319] 
	I1126 20:24:05.572724  299373 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8fbp9l.w4uvcyj7kukg5ymm \
	I1126 20:24:05.572825  299373 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
	I1126 20:24:05.572839  299373 cni.go:84] Creating CNI manager for "kindnet"
	I1126 20:24:05.574695  299373 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 20:24:06.105903  301922 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:24:06.105969  301922 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:24:06.106080  301922 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:24:06.106153  301922 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1126 20:24:06.106199  301922 kubeadm.go:319] OS: Linux
	I1126 20:24:06.106256  301922 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:24:06.106316  301922 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:24:06.106379  301922 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:24:06.106438  301922 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:24:06.106600  301922 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:24:06.106682  301922 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:24:06.106752  301922 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:24:06.106804  301922 kubeadm.go:319] CGROUPS_IO: enabled
	I1126 20:24:06.106933  301922 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:24:06.107078  301922 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:24:06.107223  301922 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:24:06.107339  301922 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:24:06.109423  301922 out.go:252]   - Generating certificates and keys ...
	I1126 20:24:06.109642  301922 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:24:06.109957  301922 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:24:06.110091  301922 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:24:06.110216  301922 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:24:06.110304  301922 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:24:06.110373  301922 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:24:06.110447  301922 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:24:06.110634  301922 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-825702 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1126 20:24:06.110711  301922 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:24:06.110897  301922 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-825702 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1126 20:24:06.111002  301922 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:24:06.111093  301922 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:24:06.111164  301922 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:24:06.111284  301922 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:24:06.111386  301922 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:24:06.111488  301922 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:24:06.111569  301922 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:24:06.111670  301922 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:24:06.111759  301922 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:24:06.111888  301922 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:24:06.111989  301922 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 20:24:06.114082  301922 out.go:252]   - Booting up control plane ...
	I1126 20:24:06.114212  301922 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:24:06.114345  301922 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:24:06.114434  301922 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:24:06.114935  301922 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:24:06.115057  301922 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:24:06.115165  301922 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:24:06.115272  301922 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:24:06.115327  301922 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:24:06.115535  301922 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:24:06.115695  301922 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 20:24:06.115783  301922 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001310189s
	I1126 20:24:06.115915  301922 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:24:06.116028  301922 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1126 20:24:06.116161  301922 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:24:06.116282  301922 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 20:24:06.116376  301922 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.44254188s
	I1126 20:24:06.116565  301922 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.017919921s
	I1126 20:24:06.116655  301922 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501308517s
	I1126 20:24:06.116792  301922 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:24:06.116967  301922 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:24:06.117053  301922 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:24:06.117301  301922 kubeadm.go:319] [mark-control-plane] Marking the node calico-825702 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:24:06.117379  301922 kubeadm.go:319] [bootstrap-token] Using token: nxelx6.dmd09ypn6rh0xqme
	I1126 20:24:06.118781  301922 out.go:252]   - Configuring RBAC rules ...
	I1126 20:24:06.118904  301922 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:24:06.119004  301922 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:24:06.119188  301922 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:24:06.119345  301922 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:24:06.119528  301922 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:24:06.119668  301922 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:24:06.119883  301922 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:24:06.119940  301922 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:24:06.119992  301922 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:24:06.119998  301922 kubeadm.go:319] 
	I1126 20:24:06.120066  301922 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:24:06.120072  301922 kubeadm.go:319] 
	I1126 20:24:06.120221  301922 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:24:06.120236  301922 kubeadm.go:319] 
	I1126 20:24:06.120265  301922 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:24:06.120358  301922 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:24:06.120496  301922 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:24:06.120511  301922 kubeadm.go:319] 
	I1126 20:24:06.120574  301922 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:24:06.120579  301922 kubeadm.go:319] 
	I1126 20:24:06.120632  301922 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:24:06.120636  301922 kubeadm.go:319] 
	I1126 20:24:06.120699  301922 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:24:06.120785  301922 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:24:06.120872  301922 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:24:06.120878  301922 kubeadm.go:319] 
	I1126 20:24:06.120983  301922 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:24:06.121079  301922 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:24:06.121084  301922 kubeadm.go:319] 
	I1126 20:24:06.121190  301922 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nxelx6.dmd09ypn6rh0xqme \
	I1126 20:24:06.121315  301922 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b \
	I1126 20:24:06.121341  301922 kubeadm.go:319] 	--control-plane 
	I1126 20:24:06.121344  301922 kubeadm.go:319] 
	I1126 20:24:06.121431  301922 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:24:06.121441  301922 kubeadm.go:319] 
	I1126 20:24:06.121547  301922 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nxelx6.dmd09ypn6rh0xqme \
	I1126 20:24:06.121695  301922 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9cd90c79a54686c211eb288f3e905de978807b759c474caa965189d211d6551b 
	I1126 20:24:06.121714  301922 cni.go:84] Creating CNI manager for "calico"
	I1126 20:24:06.122970  301922 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1126 20:24:06.124687  301922 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:24:06.124705  301922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1126 20:24:06.139843  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:24:06.955846  301922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:24:06.955918  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:06.955948  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-825702 minikube.k8s.io/updated_at=2025_11_26T20_24_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=calico-825702 minikube.k8s.io/primary=true
	I1126 20:24:07.042106  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:07.042215  301922 ops.go:34] apiserver oom_adj: -16
	I1126 20:24:05.575660  299373 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:24:05.580097  299373 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:24:05.580115  299373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:24:05.593510  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:24:05.826807  299373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:24:05.826855  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:05.826910  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-825702 minikube.k8s.io/updated_at=2025_11_26T20_24_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=kindnet-825702 minikube.k8s.io/primary=true
	I1126 20:24:05.837100  299373 ops.go:34] apiserver oom_adj: -16
	I1126 20:24:05.910995  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:06.411222  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:06.911861  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:07.411795  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:07.911186  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:08.411346  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:08.911270  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:09.411603  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:09.911254  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:10.411633  299373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:10.493576  299373 kubeadm.go:1114] duration metric: took 4.666766806s to wait for elevateKubeSystemPrivileges
	I1126 20:24:10.493616  299373 kubeadm.go:403] duration metric: took 17.290775151s to StartCluster
	I1126 20:24:10.493636  299373 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:24:10.493702  299373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:24:10.495453  299373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:24:10.495742  299373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:24:10.495744  299373 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:24:10.495824  299373 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:24:10.495932  299373 addons.go:70] Setting storage-provisioner=true in profile "kindnet-825702"
	I1126 20:24:10.495950  299373 config.go:182] Loaded profile config "kindnet-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:24:10.495949  299373 addons.go:70] Setting default-storageclass=true in profile "kindnet-825702"
	I1126 20:24:10.495979  299373 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-825702"
	I1126 20:24:10.495954  299373 addons.go:239] Setting addon storage-provisioner=true in "kindnet-825702"
	I1126 20:24:10.496123  299373 host.go:66] Checking if "kindnet-825702" exists ...
	I1126 20:24:10.496375  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:24:10.496707  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:24:10.498481  299373 out.go:179] * Verifying Kubernetes components...
	I1126 20:24:10.499688  299373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:24:10.529336  299373 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:24:10.530732  299373 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:24:10.530754  299373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:24:10.530810  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:24:10.535265  299373 addons.go:239] Setting addon default-storageclass=true in "kindnet-825702"
	I1126 20:24:10.535312  299373 host.go:66] Checking if "kindnet-825702" exists ...
	I1126 20:24:10.535786  299373 cli_runner.go:164] Run: docker container inspect kindnet-825702 --format={{.State.Status}}
	I1126 20:24:10.574363  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:24:10.574983  299373 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:24:10.574998  299373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:24:10.575059  299373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-825702
	I1126 20:24:10.599532  299373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/kindnet-825702/id_rsa Username:docker}
	I1126 20:24:10.639906  299373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:24:10.669584  299373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:24:10.699641  299373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:24:10.725550  299373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:24:10.835683  299373 node_ready.go:35] waiting up to 15m0s for node "kindnet-825702" to be "Ready" ...
	I1126 20:24:10.836182  299373 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1126 20:24:11.050598  299373 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 20:24:07.543158  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:08.043066  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:08.542666  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:09.042574  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:09.542680  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:10.042319  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:10.543216  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:11.042681  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:11.543188  301922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:24:11.624033  301922 kubeadm.go:1114] duration metric: took 4.668175134s to wait for elevateKubeSystemPrivileges
	I1126 20:24:11.624071  301922 kubeadm.go:403] duration metric: took 15.01388346s to StartCluster
	I1126 20:24:11.624096  301922 settings.go:142] acquiring lock: {Name:mkeab3f1dbcfb88a16fda32bff13c520a9a811ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:24:11.624160  301922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:24:11.626192  301922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-10722/kubeconfig: {Name:mk6fc897a3190afd853d8f239c80394df10dbd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:24:11.626442  301922 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:24:11.626651  301922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:24:11.626903  301922 config.go:182] Loaded profile config "calico-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:24:11.626880  301922 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:24:11.626987  301922 addons.go:70] Setting storage-provisioner=true in profile "calico-825702"
	I1126 20:24:11.627027  301922 addons.go:239] Setting addon storage-provisioner=true in "calico-825702"
	I1126 20:24:11.627039  301922 addons.go:70] Setting default-storageclass=true in profile "calico-825702"
	I1126 20:24:11.627087  301922 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-825702"
	I1126 20:24:11.627058  301922 host.go:66] Checking if "calico-825702" exists ...
	I1126 20:24:11.627451  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:24:11.627638  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:24:11.629622  301922 out.go:179] * Verifying Kubernetes components...
	I1126 20:24:11.631032  301922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:24:11.661644  301922 addons.go:239] Setting addon default-storageclass=true in "calico-825702"
	I1126 20:24:11.661694  301922 host.go:66] Checking if "calico-825702" exists ...
	I1126 20:24:11.662164  301922 cli_runner.go:164] Run: docker container inspect calico-825702 --format={{.State.Status}}
	I1126 20:24:11.666864  301922 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:24:11.669519  301922 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:24:11.670544  301922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:24:11.670685  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:24:11.702767  301922 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:24:11.702787  301922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:24:11.702859  301922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-825702
	I1126 20:24:11.706879  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:24:11.723875  301922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/calico-825702/id_rsa Username:docker}
	I1126 20:24:11.752254  301922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:24:11.794332  301922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:24:11.824148  301922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:24:11.843061  301922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:24:11.946968  301922 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1126 20:24:11.948471  301922 node_ready.go:35] waiting up to 15m0s for node "calico-825702" to be "Ready" ...
	I1126 20:24:12.168202  301922 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 20:24:12.169286  301922 addons.go:530] duration metric: took 542.404084ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:24:11.051711  299373 addons.go:530] duration metric: took 555.889438ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:24:11.342411  299373 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-825702" context rescaled to 1 replicas
	W1126 20:24:12.839283  299373 node_ready.go:57] node "kindnet-825702" has "Ready":"False" status (will retry)
	W1126 20:24:14.845082  299373 node_ready.go:57] node "kindnet-825702" has "Ready":"False" status (will retry)
	I1126 20:24:12.451657  301922 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-825702" context rescaled to 1 replicas
	W1126 20:24:13.951968  301922 node_ready.go:57] node "calico-825702" has "Ready":"False" status (will retry)
	W1126 20:24:15.952159  301922 node_ready.go:57] node "calico-825702" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.734002126Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.734027118Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.734049306Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.738343928Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.738365507Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.738384673Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.742668052Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.742693087Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.742713899Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.747656039Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.747678488Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.747695681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.752275393Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:24:01 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:01.752296685Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:24:07 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:07.994263516Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0cbbb5e5-30ee-4235-8723-1853af081aa0 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:24:07 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:07.995114905Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b20c75e5-bb7d-4397-ae70-47e4455a76f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:24:07 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:07.996192209Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j/dashboard-metrics-scraper" id=0d621099-8e78-4968-a2d0-5e7e22110b5f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:24:07 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:07.996320674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.002876564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.003553783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.034431219Z" level=info msg="Created container 32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j/dashboard-metrics-scraper" id=0d621099-8e78-4968-a2d0-5e7e22110b5f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.035216498Z" level=info msg="Starting container: 32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21" id=3477fe42-9872-4eef-a710-e0460b695ff0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.037221429Z" level=info msg="Started container" PID=1825 containerID=32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j/dashboard-metrics-scraper id=3477fe42-9872-4eef-a710-e0460b695ff0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=017ef0ec5922b7ad11bf31114bb4c491873f46267549097b30bafd19b0cd4886
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.161315256Z" level=info msg="Removing container: 73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818" id=c03688e1-243f-42b7-bea4-3faf224c389a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:24:08 default-k8s-diff-port-178152 crio[569]: time="2025-11-26T20:24:08.170831185Z" level=info msg="Removed container 73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j/dashboard-metrics-scraper" id=c03688e1-243f-42b7-bea4-3faf224c389a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	32fb436af28ae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago       Exited              dashboard-metrics-scraper   3                   017ef0ec5922b       dashboard-metrics-scraper-6ffb444bf9-lsm8j             kubernetes-dashboard
	4a7a55586fdac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           28 seconds ago       Running             storage-provisioner         1                   1c32bc080f106       storage-provisioner                                    kube-system
	a2ca658bde7be       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   51 seconds ago       Running             kubernetes-dashboard        0                   888289f9667b9       kubernetes-dashboard-855c9754f9-m2nr4                  kubernetes-dashboard
	b5b54b11bd45b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   9504e470cabe5       coredns-66bc5c9577-tpmmm                               kube-system
	5f9228ddca102       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   2f7e294ddcea4       kindnet-bmzz2                                          kube-system
	ca65c52a7e15d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           58 seconds ago       Running             kube-proxy                  0                   9c19c36e89c69       kube-proxy-vd7fp                                       kube-system
	3656d4f58204b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   11b80ea4b81b1       busybox                                                default
	e8ac49bc740f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   1c32bc080f106       storage-provisioner                                    kube-system
	851ab28993a8b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   930252c9e9174       kube-controller-manager-default-k8s-diff-port-178152   kube-system
	cd9d1e4467356       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   bfc3619e67336       kube-apiserver-default-k8s-diff-port-178152            kube-system
	45aa87e14b73b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   25f8fb31ba381       etcd-default-k8s-diff-port-178152                      kube-system
	53d64e031f1a8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   a2c21fb760357       kube-scheduler-default-k8s-diff-port-178152            kube-system
	
	
	==> coredns [b5b54b11bd45b447bfaecefe94487f516b269bf58598eb7dcfa18af5edc1612e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60584 - 19012 "HINFO IN 8096409391269669939.2220905388687540780. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.460755858s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-178152
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-178152
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=default-k8s-diff-port-178152
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_22_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:22:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-178152
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:24:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:24:11 +0000   Wed, 26 Nov 2025 20:22:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:24:11 +0000   Wed, 26 Nov 2025 20:22:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:24:11 +0000   Wed, 26 Nov 2025 20:22:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:24:11 +0000   Wed, 26 Nov 2025 20:22:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-178152
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                d91795ef-51fb-4835-abf4-4b138b22a490
	  Boot ID:                    4ab83601-3d95-4490-96bc-14b416ba2714
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-tpmmm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-178152                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-bmzz2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-178152             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-178152    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-vd7fp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-178152             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lsm8j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m2nr4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-178152 event: Registered Node default-k8s-diff-port-178152 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-178152 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 63s)  kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 63s)  kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 63s)  kubelet          Node default-k8s-diff-port-178152 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node default-k8s-diff-port-178152 event: Registered Node default-k8s-diff-port-178152 in Controller
	
	
	==> dmesg <==
	[  +0.091832] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023778] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.010162] kauditd_printk_skb: 47 callbacks suppressed
	[Nov26 19:37] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.011177] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.024869] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.022915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +1.023864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +4.032568] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[  +8.126198] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[ +16.382331] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	[Nov26 19:38] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa da 11 49 93 d9 d6 fc c5 7d df aa 08 00
	
	
	==> etcd [45aa87e14b73bdfe289340af54adbd31ea4129fb3bbd2a635ed0f95587efea73] <==
	{"level":"warn","ts":"2025-11-26T20:23:19.363522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.372808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.380682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.386744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.415575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.423419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.434006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.444992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.450164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.458899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.468540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.478966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.492370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.508690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.517180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.528919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.535979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.542434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:23:19.604121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45164","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T20:23:51.079035Z","caller":"traceutil/trace.go:172","msg":"trace[2022983347] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"106.373018ms","start":"2025-11-26T20:23:50.972643Z","end":"2025-11-26T20:23:51.079016Z","steps":["trace[2022983347] 'process raft request'  (duration: 106.251042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T20:23:51.274721Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.870037ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:23:51.274804Z","caller":"traceutil/trace.go:172","msg":"trace[598037595] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:611; }","duration":"122.962634ms","start":"2025-11-26T20:23:51.151824Z","end":"2025-11-26T20:23:51.274787Z","steps":["trace[598037595] 'agreement among raft nodes before linearized reading'  (duration: 91.104634ms)","trace[598037595] 'range keys from in-memory index tree'  (duration: 31.734178ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-26T20:23:51.274833Z","caller":"traceutil/trace.go:172","msg":"trace[1194689814] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"166.970505ms","start":"2025-11-26T20:23:51.107844Z","end":"2025-11-26T20:23:51.274814Z","steps":["trace[1194689814] 'process raft request'  (duration: 135.124521ms)","trace[1194689814] 'compare'  (duration: 31.675879ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-26T20:23:51.645499Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"221.57607ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T20:23:51.645571Z","caller":"traceutil/trace.go:172","msg":"trace[1417153719] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:613; }","duration":"221.66038ms","start":"2025-11-26T20:23:51.423897Z","end":"2025-11-26T20:23:51.645557Z","steps":["trace[1417153719] 'range keys from in-memory index tree'  (duration: 221.45153ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:24:20 up  1:06,  0 user,  load average: 10.08, 4.97, 2.79
	Linux default-k8s-diff-port-178152 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5f9228ddca10228c8bcb4f8b1d2167718f80e4d928f6cbae8a8ee24504ece49e] <==
	I1126 20:23:21.518894       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:23:21.519113       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:23:21.519268       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:23:21.519286       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:23:21.519298       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:23:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:23:21.721969       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:23:21.721995       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:23:21.722006       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:23:21.722117       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:23:51.723079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:23:51.723094       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:23:51.723079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:23:51.723085       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1126 20:23:53.122663       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:23:53.122693       1 metrics.go:72] Registering metrics
	I1126 20:23:53.122754       1 controller.go:711] "Syncing nftables rules"
	I1126 20:24:01.722413       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:24:01.722452       1 main.go:301] handling current node
	I1126 20:24:11.722388       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:24:11.722443       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cd9d1e44673562de24369667807437c6a3c635356a7d55a049742bc674b8d8bb] <==
	I1126 20:23:20.130717       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:23:20.130724       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:23:20.130862       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:23:20.130898       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1126 20:23:20.130910       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1126 20:23:20.131679       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:23:20.131713       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:23:20.131853       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:23:20.131866       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:23:20.132275       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:23:20.133300       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:23:20.153018       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1126 20:23:20.153042       1 policy_source.go:240] refreshing policies
	I1126 20:23:20.189170       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:23:20.446074       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:23:20.475157       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:23:20.505488       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:23:20.511744       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:23:20.518588       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:23:20.545797       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.74.101"}
	I1126 20:23:20.554552       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.179.105"}
	I1126 20:23:21.027682       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:23:24.058889       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:23:24.109362       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:23:24.161261       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [851ab28993a8b771e5c0a2f66a99e6f22fbce78ae80259d843eb935540e459cb] <==
	I1126 20:23:23.612272       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:23:23.612309       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:23:23.612320       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:23:23.612327       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:23:23.618603       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:23:23.621726       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 20:23:23.626995       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:23:23.627012       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:23:23.627022       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:23:23.629237       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:23:23.631524       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:23:23.632706       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:23:23.648919       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:23:23.651068       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:23:23.652429       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:23:23.654688       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:23:23.655887       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:23:23.655925       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:23:23.656092       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:23:23.657582       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:23:23.662370       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:23:23.662507       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:23:23.662616       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-178152"
	I1126 20:23:23.662683       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 20:23:23.671687       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ca65c52a7e15d14f87ed86b2027a3434dde86bb3c0239a70ce563e361f2bf410] <==
	I1126 20:23:21.387573       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:23:21.450349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:23:21.550508       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:23:21.550540       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:23:21.550614       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:23:21.569528       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:23:21.569578       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:23:21.574472       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:23:21.574853       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:23:21.574877       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:23:21.577572       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:23:21.577613       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:23:21.577649       1 config.go:200] "Starting service config controller"
	I1126 20:23:21.577656       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:23:21.577581       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:23:21.577897       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:23:21.577962       1 config.go:309] "Starting node config controller"
	I1126 20:23:21.577969       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:23:21.678758       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:23:21.678888       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:23:21.678910       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:23:21.678919       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [53d64e031f1a8bdf5d5b01443fbdf38f958b0ba7ea8c6514663f714eb5cedebf] <==
	I1126 20:23:19.410368       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:23:20.352851       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:23:20.352878       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:23:20.357837       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:23:20.357930       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:23:20.357845       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:23:20.357976       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:23:20.357839       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1126 20:23:20.358013       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1126 20:23:20.358277       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:23:20.358310       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:23:20.458531       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1126 20:23:20.458575       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:23:20.458529       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:23:29 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:29.055580     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2nr4" podStartSLOduration=1.256444008 podStartE2EDuration="5.055557549s" podCreationTimestamp="2025-11-26 20:23:24 +0000 UTC" firstStartedPulling="2025-11-26 20:23:24.564365409 +0000 UTC m=+6.664085249" lastFinishedPulling="2025-11-26 20:23:28.363478926 +0000 UTC m=+10.463198790" observedRunningTime="2025-11-26 20:23:29.055138909 +0000 UTC m=+11.154858770" watchObservedRunningTime="2025-11-26 20:23:29.055557549 +0000 UTC m=+11.155277411"
	Nov 26 20:23:30 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:30.966918     731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:23:31 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:31.049468     731 scope.go:117] "RemoveContainer" containerID="f9993fb2e1fb3b48476ed7b65f18f08f394cc9edf8167b63f659583295ad63a9"
	Nov 26 20:23:32 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:32.053764     731 scope.go:117] "RemoveContainer" containerID="f9993fb2e1fb3b48476ed7b65f18f08f394cc9edf8167b63f659583295ad63a9"
	Nov 26 20:23:32 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:32.053900     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:32 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:32.054088     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:23:33 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:33.057274     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:33 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:33.057482     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:23:34 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:34.957029     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:34 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:34.957290     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:23:45 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:45.994130     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:46 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:46.091327     731 scope.go:117] "RemoveContainer" containerID="8ee1f2132cf75e0705f7199185208475b96dc6b024bf2675725957b2672db5d6"
	Nov 26 20:23:46 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:46.091565     731 scope.go:117] "RemoveContainer" containerID="73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818"
	Nov 26 20:23:46 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:46.091756     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:23:52 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:52.113765     731 scope.go:117] "RemoveContainer" containerID="e8ac49bc740f47f5f32f520593ad8e80e3fef725e3ab711117dd927d45c20eb3"
	Nov 26 20:23:54 default-k8s-diff-port-178152 kubelet[731]: I1126 20:23:54.956730     731 scope.go:117] "RemoveContainer" containerID="73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818"
	Nov 26 20:23:54 default-k8s-diff-port-178152 kubelet[731]: E1126 20:23:54.956903     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:24:07 default-k8s-diff-port-178152 kubelet[731]: I1126 20:24:07.993893     731 scope.go:117] "RemoveContainer" containerID="73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818"
	Nov 26 20:24:08 default-k8s-diff-port-178152 kubelet[731]: I1126 20:24:08.159372     731 scope.go:117] "RemoveContainer" containerID="73af847d2ee3aeea34d4b065604680772a964119e5745198841fdf44c04ac818"
	Nov 26 20:24:08 default-k8s-diff-port-178152 kubelet[731]: I1126 20:24:08.159625     731 scope.go:117] "RemoveContainer" containerID="32fb436af28ae1c55272f8ad63af213d1dbd9affbdb2dbe7945dedf2c076fc21"
	Nov 26 20:24:08 default-k8s-diff-port-178152 kubelet[731]: E1126 20:24:08.159835     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lsm8j_kubernetes-dashboard(6c175785-959c-450c-9696-0881dcaaf217)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lsm8j" podUID="6c175785-959c-450c-9696-0881dcaaf217"
	Nov 26 20:24:14 default-k8s-diff-port-178152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:24:14 default-k8s-diff-port-178152 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:24:14 default-k8s-diff-port-178152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 26 20:24:14 default-k8s-diff-port-178152 systemd[1]: kubelet.service: Consumed 1.748s CPU time.
	
	
	==> kubernetes-dashboard [a2ca658bde7be15ad908300df3903014b37b72b558d8856ce799024e45c11ee0] <==
	2025/11/26 20:23:28 Starting overwatch
	2025/11/26 20:23:28 Using namespace: kubernetes-dashboard
	2025/11/26 20:23:28 Using in-cluster config to connect to apiserver
	2025/11/26 20:23:28 Using secret token for csrf signing
	2025/11/26 20:23:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:23:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:23:28 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:23:28 Generating JWE encryption key
	2025/11/26 20:23:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:23:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:23:28 Initializing JWE encryption key from synchronized object
	2025/11/26 20:23:28 Creating in-cluster Sidecar client
	2025/11/26 20:23:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:23:28 Serving insecurely on HTTP port: 9090
	2025/11/26 20:23:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4a7a55586fdacfafe83753690495f1e47b964f7c04ba8d593fac2ce9f15e9dd8] <==
	I1126 20:23:52.194789       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:23:52.194901       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:23:52.197607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:55.653149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:23:59.914145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:03.513112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:06.567167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:09.590270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:09.600631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:24:09.601016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:24:09.601508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-178152_aa7fe461-e867-4347-a0a6-7207eabe3e57!
	I1126 20:24:09.601649       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bea3325-c523-4ea4-89b9-0b2d778812eb", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-178152_aa7fe461-e867-4347-a0a6-7207eabe3e57 became leader
	W1126 20:24:09.609171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:09.616259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:24:09.702284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-178152_aa7fe461-e867-4347-a0a6-7207eabe3e57!
	W1126 20:24:11.619273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:11.626066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:13.630010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:13.634009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:15.636906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:15.640546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:17.644076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:17.648825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:19.651657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:24:19.656067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e8ac49bc740f47f5f32f520593ad8e80e3fef725e3ab711117dd927d45c20eb3] <==
	I1126 20:23:21.352988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:23:51.356798       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152: exit status 2 (404.293291ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-178152 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.07s)


Test pass (264/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.7
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.95
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.8
22 TestOffline 80.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 128.07
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 8.41
48 TestAddons/StoppedEnableDisable 16.61
49 TestCertOptions 22.59
50 TestCertExpiration 208.82
52 TestForceSystemdFlag 24.54
53 TestForceSystemdEnv 36.34
58 TestErrorSpam/setup 22.95
59 TestErrorSpam/start 0.63
60 TestErrorSpam/status 0.92
61 TestErrorSpam/pause 6.51
62 TestErrorSpam/unpause 5.76
63 TestErrorSpam/stop 18.03
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.66
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.02
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.89
75 TestFunctional/serial/CacheCmd/cache/add_local 1.13
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 66.67
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.13
86 TestFunctional/serial/LogsFileCmd 1.14
87 TestFunctional/serial/InvalidService 4.14
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 9.07
91 TestFunctional/parallel/DryRun 0.42
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.04
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 27.44
101 TestFunctional/parallel/SSHCmd 0.59
102 TestFunctional/parallel/CpCmd 1.8
103 TestFunctional/parallel/MySQL 15.33
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.59
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
113 TestFunctional/parallel/License 0.44
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
116 TestFunctional/parallel/MountCmd/any-port 5.65
117 TestFunctional/parallel/ProfileCmd/profile_list 0.44
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
119 TestFunctional/parallel/Version/short 0.13
120 TestFunctional/parallel/Version/components 0.57
121 TestFunctional/parallel/MountCmd/specific-port 1.64
122 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.14
128 TestFunctional/parallel/ImageCommands/Setup 0.95
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.24
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 141.4
163 TestMultiControlPlane/serial/DeployApp 3.9
164 TestMultiControlPlane/serial/PingHostFromPods 1
165 TestMultiControlPlane/serial/AddWorkerNode 53.86
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
168 TestMultiControlPlane/serial/CopyFile 16.54
169 TestMultiControlPlane/serial/StopSecondaryNode 19.72
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.36
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.42
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.42
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
176 TestMultiControlPlane/serial/StopCluster 43.84
177 TestMultiControlPlane/serial/RestartCluster 56.26
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 70.28
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
185 TestJSONOutput/start/Command 67.55
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.04
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 26.3
211 TestKicCustomNetwork/use_default_bridge_network 22.16
212 TestKicExistingNetwork 23.42
213 TestKicCustomSubnet 23.71
214 TestKicStaticIP 25.38
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 47.6
219 TestMountStart/serial/StartWithMountFirst 4.66
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 7.61
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.65
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.26
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 91.69
231 TestMultiNode/serial/DeployApp2Nodes 3.44
232 TestMultiNode/serial/PingHostFrom2Pods 0.68
233 TestMultiNode/serial/AddNode 22.33
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.63
236 TestMultiNode/serial/CopyFile 9.46
237 TestMultiNode/serial/StopNode 2.19
238 TestMultiNode/serial/StartAfterStop 6.99
239 TestMultiNode/serial/RestartKeepsNodes 56.64
240 TestMultiNode/serial/DeleteNode 4.94
241 TestMultiNode/serial/StopMultiNode 28.56
242 TestMultiNode/serial/RestartMultiNode 48.36
243 TestMultiNode/serial/ValidateNameConflict 26.49
248 TestPreload 108.93
250 TestScheduledStopUnix 95.37
253 TestInsufficientStorage 9.22
254 TestRunningBinaryUpgrade 47.67
256 TestKubernetesUpgrade 300.83
257 TestMissingContainerUpgrade 88.38
259 TestPause/serial/Start 77.98
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
262 TestNoKubernetes/serial/StartWithK8s 33.49
263 TestNoKubernetes/serial/StartWithStopK8s 23.64
264 TestNoKubernetes/serial/Start 4.18
265 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
267 TestNoKubernetes/serial/ProfileList 1.75
268 TestNoKubernetes/serial/Stop 1.27
269 TestNoKubernetes/serial/StartNoArgs 6.57
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
271 TestPause/serial/SecondStartNoReconfiguration 7.27
272 TestStoppedBinaryUpgrade/Setup 0.67
273 TestStoppedBinaryUpgrade/Upgrade 288.68
289 TestNetworkPlugins/group/false 3.48
294 TestStartStop/group/old-k8s-version/serial/FirstStart 49.65
295 TestStartStop/group/old-k8s-version/serial/DeployApp 8.28
297 TestStartStop/group/old-k8s-version/serial/Stop 16.11
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
299 TestStartStop/group/old-k8s-version/serial/SecondStart 26.15
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
305 TestStartStop/group/no-preload/serial/FirstStart 51.72
307 TestStartStop/group/embed-certs/serial/FirstStart 43.98
308 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.48
311 TestStartStop/group/no-preload/serial/DeployApp 8.26
313 TestStartStop/group/newest-cni/serial/FirstStart 30.57
314 TestStartStop/group/embed-certs/serial/DeployApp 8.26
316 TestStartStop/group/no-preload/serial/Stop 16.44
318 TestStartStop/group/embed-certs/serial/Stop 18.08
319 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
320 TestStartStop/group/no-preload/serial/SecondStart 47.89
321 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
324 TestStartStop/group/embed-certs/serial/SecondStart 47.55
325 TestStartStop/group/newest-cni/serial/Stop 2.52
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
327 TestStartStop/group/newest-cni/serial/SecondStart 12.55
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.35
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
334 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.36
335 TestNetworkPlugins/group/auto/Start 45.46
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.63
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
341 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
343 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
346 TestNetworkPlugins/group/kindnet/Start 44.58
347 TestNetworkPlugins/group/calico/Start 50.29
348 TestNetworkPlugins/group/auto/KubeletFlags 0.32
349 TestNetworkPlugins/group/auto/NetCatPod 10.25
350 TestNetworkPlugins/group/auto/DNS 0.11
351 TestNetworkPlugins/group/auto/Localhost 0.1
352 TestNetworkPlugins/group/auto/HairPin 0.09
353 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
357 TestNetworkPlugins/group/custom-flannel/Start 63.06
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/enable-default-cni/Start 63.45
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
361 TestNetworkPlugins/group/kindnet/NetCatPod 10.95
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/kindnet/DNS 0.14
364 TestNetworkPlugins/group/kindnet/Localhost 0.12
365 TestNetworkPlugins/group/kindnet/HairPin 0.12
366 TestNetworkPlugins/group/calico/KubeletFlags 0.31
367 TestNetworkPlugins/group/calico/NetCatPod 9.18
368 TestNetworkPlugins/group/calico/DNS 0.12
369 TestNetworkPlugins/group/calico/Localhost 0.1
370 TestNetworkPlugins/group/calico/HairPin 0.1
371 TestNetworkPlugins/group/flannel/Start 43.37
372 TestNetworkPlugins/group/bridge/Start 39.97
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.17
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
376 TestNetworkPlugins/group/custom-flannel/DNS 0.13
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
385 TestNetworkPlugins/group/flannel/NetCatPod 9.17
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
387 TestNetworkPlugins/group/bridge/NetCatPod 9.22
388 TestNetworkPlugins/group/flannel/DNS 0.1
389 TestNetworkPlugins/group/flannel/Localhost 0.08
390 TestNetworkPlugins/group/flannel/HairPin 0.08
391 TestNetworkPlugins/group/bridge/DNS 0.1
392 TestNetworkPlugins/group/bridge/Localhost 0.08
393 TestNetworkPlugins/group/bridge/HairPin 0.08
TestDownloadOnly/v1.28.0/json-events (4.7s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-179609 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-179609 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.702513912s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.70s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1126 19:34:51.812718   14258 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1126 19:34:51.812802   14258 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-179609
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-179609: exit status 85 (70.086513ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-179609 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-179609 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:34:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:34:47.158421   14270 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:34:47.158515   14270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:47.158523   14270 out.go:374] Setting ErrFile to fd 2...
	I1126 19:34:47.158527   14270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:47.158698   14270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	W1126 19:34:47.158801   14270 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21974-10722/.minikube/config/config.json: open /home/jenkins/minikube-integration/21974-10722/.minikube/config/config.json: no such file or directory
	I1126 19:34:47.159669   14270 out.go:368] Setting JSON to true
	I1126 19:34:47.160535   14270 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1037,"bootTime":1764184650,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:34:47.160582   14270 start.go:143] virtualization: kvm guest
	I1126 19:34:47.165537   14270 out.go:99] [download-only-179609] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1126 19:34:47.165660   14270 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball: no such file or directory
	I1126 19:34:47.165709   14270 notify.go:221] Checking for updates...
	I1126 19:34:47.166770   14270 out.go:171] MINIKUBE_LOCATION=21974
	I1126 19:34:47.167906   14270 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:34:47.169142   14270 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 19:34:47.170350   14270 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 19:34:47.171525   14270 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1126 19:34:47.173636   14270 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1126 19:34:47.173884   14270 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:34:47.198564   14270 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 19:34:47.198668   14270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:34:47.593180   14270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-26 19:34:47.584038466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:34:47.593288   14270 docker.go:319] overlay module found
	I1126 19:34:47.594936   14270 out.go:99] Using the docker driver based on user configuration
	I1126 19:34:47.594965   14270 start.go:309] selected driver: docker
	I1126 19:34:47.594973   14270 start.go:927] validating driver "docker" against <nil>
	I1126 19:34:47.595050   14270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:34:47.651691   14270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-26 19:34:47.64325534 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:34:47.651877   14270 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:34:47.652431   14270 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1126 19:34:47.652607   14270 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1126 19:34:47.654281   14270 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-179609 host does not exist
	  To start a cluster, run: "minikube start -p download-only-179609"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-179609
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (3.95s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-602722 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-602722 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.953048594s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.95s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1126 19:34:56.183018   14258 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1126 19:34:56.183070   14258 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-602722
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-602722: exit status 85 (68.40391ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-179609 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-179609 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ delete  │ -p download-only-179609                                                                                                                                                   │ download-only-179609 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │ 26 Nov 25 19:34 UTC │
	│ start   │ -o=json --download-only -p download-only-602722 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-602722 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:34:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:34:52.277174   14636 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:34:52.277251   14636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:52.277264   14636 out.go:374] Setting ErrFile to fd 2...
	I1126 19:34:52.277268   14636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:52.277430   14636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:34:52.277876   14636 out.go:368] Setting JSON to true
	I1126 19:34:52.278594   14636 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1042,"bootTime":1764184650,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:34:52.278638   14636 start.go:143] virtualization: kvm guest
	I1126 19:34:52.280332   14636 out.go:99] [download-only-602722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 19:34:52.280443   14636 notify.go:221] Checking for updates...
	I1126 19:34:52.281498   14636 out.go:171] MINIKUBE_LOCATION=21974
	I1126 19:34:52.282768   14636 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:34:52.283960   14636 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 19:34:52.285051   14636 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 19:34:52.286072   14636 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1126 19:34:52.288200   14636 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1126 19:34:52.288417   14636 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:34:52.311238   14636 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 19:34:52.311343   14636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:34:52.367540   14636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-26 19:34:52.358939296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:34:52.367642   14636 docker.go:319] overlay module found
	I1126 19:34:52.369014   14636 out.go:99] Using the docker driver based on user configuration
	I1126 19:34:52.369038   14636 start.go:309] selected driver: docker
	I1126 19:34:52.369046   14636 start.go:927] validating driver "docker" against <nil>
	I1126 19:34:52.369110   14636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:34:52.425542   14636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-26 19:34:52.41653165 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:34:52.425680   14636 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:34:52.426129   14636 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1126 19:34:52.426273   14636 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1126 19:34:52.427764   14636 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-602722 host does not exist
	  To start a cluster, run: "minikube start -p download-only-602722"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-602722
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-444715 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-444715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-444715
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I1126 19:34:57.247211   14258 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-671361 --alsologtostderr --binary-mirror http://127.0.0.1:46231 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-671361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-671361
--- PASS: TestBinaryMirror (0.80s)

TestOffline (80.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-073078 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-073078 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m17.410090081s)
helpers_test.go:175: Cleaning up "offline-crio-073078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-073078
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-073078: (2.855787394s)
--- PASS: TestOffline (80.27s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-368879
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-368879: exit status 85 (59.180828ms)

-- stdout --
	* Profile "addons-368879" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-368879"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-368879
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-368879: exit status 85 (60.119761ms)

-- stdout --
	* Profile "addons-368879" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-368879"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (128.07s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-368879 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-368879 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.069944005s)
--- PASS: TestAddons/Setup (128.07s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-368879 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-368879 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-368879 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-368879 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d5bca682-20c4-4eb9-91cc-2278bde34e49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d5bca682-20c4-4eb9-91cc-2278bde34e49] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003431329s
addons_test.go:694: (dbg) Run:  kubectl --context addons-368879 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-368879 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-368879 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

TestAddons/StoppedEnableDisable (16.61s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-368879
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-368879: (16.343041066s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-368879
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-368879
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-368879
--- PASS: TestAddons/StoppedEnableDisable (16.61s)

TestCertOptions (22.59s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-706331 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-706331 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (19.525499376s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-706331 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-706331 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-706331 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-706331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-706331
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-706331: (2.422241143s)
--- PASS: TestCertOptions (22.59s)

TestCertExpiration (208.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-571738 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1126 20:18:06.730482   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-571738 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (20.504261622s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-571738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-571738 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.414898658s)
helpers_test.go:175: Cleaning up "cert-expiration-571738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-571738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-571738: (2.895781826s)
--- PASS: TestCertExpiration (208.82s)

TestForceSystemdFlag (24.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-845547 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-845547 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.896066177s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-845547 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-845547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-845547
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-845547: (2.364259598s)
--- PASS: TestForceSystemdFlag (24.54s)

TestForceSystemdEnv (36.34s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-093715 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-093715 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.916430478s)
helpers_test.go:175: Cleaning up "force-systemd-env-093715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-093715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-093715: (2.419666595s)
--- PASS: TestForceSystemdEnv (36.34s)

TestErrorSpam/setup (22.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-558791 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-558791 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-558791 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-558791 --driver=docker  --container-runtime=crio: (22.95166871s)
--- PASS: TestErrorSpam/setup (22.95s)

TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (6.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 pause: exit status 80 (2.135391764s)

-- stdout --
	* Pausing node nospam-558791 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 pause: exit status 80 (2.037719507s)

-- stdout --
	* Pausing node nospam-558791 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 pause: exit status 80 (2.334841859s)

-- stdout --
	* Pausing node nospam-558791 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.51s)

                                                
                                    
TestErrorSpam/unpause (5.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 unpause: exit status 80 (2.307535087s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-558791 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 unpause: exit status 80 (1.548637416s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-558791 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 unpause: exit status 80 (1.90374154s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-558791 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.76s)

                                                
                                    
TestErrorSpam/stop (18.03s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 stop: (17.840521623s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-558791 --log_dir /tmp/nospam-558791 stop
--- PASS: TestErrorSpam/stop (18.03s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21974-10722/.minikube/files/etc/test/nested/copy/14258/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (37.66s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960066 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-960066 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.657499939s)
--- PASS: TestFunctional/serial/StartWithProxy (37.66s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.02s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1126 19:41:40.970349   14258 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960066 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-960066 --alsologtostderr -v=8: (6.022271583s)
functional_test.go:678: soft start took 6.022939865s for "functional-960066" cluster.
I1126 19:41:46.992987   14258 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.02s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-960066 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-960066 cache add registry.k8s.io/pause:3.3: (1.072610439s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-960066 /tmp/TestFunctionalserialCacheCmdcacheadd_local712081890/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 cache add minikube-local-cache-test:functional-960066
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 cache delete minikube-local-cache-test:functional-960066
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-960066
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (275.3106ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 kubectl -- --context functional-960066 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-960066 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (66.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960066 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1126 19:42:06.755417   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:06.761753   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:06.773063   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:06.794377   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:06.835692   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:06.917039   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:07.078510   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:07.400182   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:08.042196   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:09.323785   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:11.886625   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:17.008080   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:27.249655   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:47.731604   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-960066 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m6.672559916s)
functional_test.go:776: restart took 1m6.672677504s for "functional-960066" cluster.
I1126 19:43:00.089072   14258 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (66.67s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-960066 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-960066 logs: (1.128294575s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 logs --file /tmp/TestFunctionalserialLogsFileCmd408690203/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-960066 logs --file /tmp/TestFunctionalserialLogsFileCmd408690203/001/logs.txt: (1.135502579s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                    
TestFunctional/serial/InvalidService (4.14s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-960066 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-960066
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-960066: exit status 115 (329.374998ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30853 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-960066 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.14s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 config get cpus: exit status 14 (81.756966ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 config get cpus: exit status 14 (83.474781ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-960066 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-960066 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 48138: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-960066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.965455ms)

                                                
                                                
-- stdout --
	* [functional-960066] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:43:08.602151   47522 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:43:08.602440   47522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:43:08.602452   47522 out.go:374] Setting ErrFile to fd 2...
	I1126 19:43:08.602471   47522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:43:08.602778   47522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:43:08.603256   47522 out.go:368] Setting JSON to false
	I1126 19:43:08.604212   47522 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1539,"bootTime":1764184650,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:43:08.604270   47522 start.go:143] virtualization: kvm guest
	I1126 19:43:08.608570   47522 out.go:179] * [functional-960066] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 19:43:08.609837   47522 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:43:08.609837   47522 notify.go:221] Checking for updates...
	I1126 19:43:08.612165   47522 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:43:08.613276   47522 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 19:43:08.614321   47522 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 19:43:08.615370   47522 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 19:43:08.616499   47522 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:43:08.618288   47522 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:43:08.619029   47522 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:43:08.646650   47522 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 19:43:08.646765   47522 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:43:08.711289   47522 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-26 19:43:08.700692832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:43:08.711398   47522 docker.go:319] overlay module found
	I1126 19:43:08.712929   47522 out.go:179] * Using the docker driver based on existing profile
	I1126 19:43:08.714022   47522 start.go:309] selected driver: docker
	I1126 19:43:08.714038   47522 start.go:927] validating driver "docker" against &{Name:functional-960066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-960066 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:43:08.714188   47522 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:43:08.715897   47522 out.go:203] 
	W1126 19:43:08.716886   47522 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1126 19:43:08.717930   47522 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960066 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
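The non-zero exit above (status 23) is the expected outcome of the dry run: minikube rejects the requested 250MB because it falls below the usable minimum of 1800MB. A minimal shell sketch of that check, with the threshold and exit status taken from the log while the comparison logic itself is an assumption, not minikube's actual implementation:

```shell
# Sketch of the RSRC_INSUFFICIENT_REQ_MEMORY validation exercised by
# `minikube start --dry-run --memory 250MB`; logic is an assumption.
requested_mb=250
minimum_mb=1800
if [ "$requested_mb" -lt "$minimum_mb" ]; then
  # minikube reports this class of error and exits with status 23
  echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested ${requested_mb}MB is less than the usable minimum of ${minimum_mb}MB"
  exit_code=23
else
  exit_code=0
fi
echo "exit status: $exit_code"
```

Because `--dry-run` only validates configuration, the test can assert on the exit status without a cluster ever being created.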

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-960066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (173.996847ms)

-- stdout --
	* [functional-960066] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1126 19:43:08.427149   47363 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:43:08.427268   47363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:43:08.427278   47363 out.go:374] Setting ErrFile to fd 2...
	I1126 19:43:08.427286   47363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:43:08.427697   47363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:43:08.428208   47363 out.go:368] Setting JSON to false
	I1126 19:43:08.429239   47363 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1538,"bootTime":1764184650,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:43:08.429295   47363 start.go:143] virtualization: kvm guest
	I1126 19:43:08.431094   47363 out.go:179] * [functional-960066] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1126 19:43:08.432398   47363 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:43:08.432400   47363 notify.go:221] Checking for updates...
	I1126 19:43:08.433547   47363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:43:08.434835   47363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 19:43:08.435890   47363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 19:43:08.436971   47363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 19:43:08.437969   47363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:43:08.439587   47363 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:43:08.440431   47363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:43:08.467722   47363 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 19:43:08.467854   47363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:43:08.523386   47363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-26 19:43:08.512944852 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:43:08.523531   47363 docker.go:319] overlay module found
	I1126 19:43:08.524930   47363 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1126 19:43:08.525992   47363 start.go:309] selected driver: docker
	I1126 19:43:08.526012   47363 start.go:927] validating driver "docker" against &{Name:functional-960066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-960066 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:43:08.526116   47363 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:43:08.527950   47363 out.go:203] 
	W1126 19:43:08.529213   47363 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1126 19:43:08.530191   47363 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (27.44s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5a9af2cd-bb13-4a8f-a93c-a7bcb0fb3a48] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004079213s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-960066 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-960066 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-960066 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-960066 apply -f testdata/storage-provisioner/pod.yaml
I1126 19:43:14.844600   14258 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8ee56392-aebb-40df-b4ca-a36b6884ee6d] Pending
helpers_test.go:352: "sp-pod" [8ee56392-aebb-40df-b4ca-a36b6884ee6d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8ee56392-aebb-40df-b4ca-a36b6884ee6d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.002326139s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-960066 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-960066 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-960066 apply -f testdata/storage-provisioner/pod.yaml
E1126 19:43:28.693091   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1126 19:43:28.743381   14258 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1b81cb7f-90ba-42d9-9793-8f46c4c7781b] Pending
helpers_test.go:352: "sp-pod" [1b81cb7f-90ba-42d9-9793-8f46c4c7781b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1b81cb7f-90ba-42d9-9793-8f46c4c7781b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003991443s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-960066 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.44s)
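The manifest at testdata/storage-provisioner/pvc.yaml is not reproduced in the log. A typical claim that would drive the flow above (the `myclaim` name appears in the log; the access mode and size here are assumptions, not the test's actual manifest) looks like:

```yaml
# Hypothetical PVC manifest; the real testdata file may differ.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```

The test then mounts the bound volume into `sp-pod`, writes /tmp/mount/foo, recreates the pod, and verifies the file survived, which exercises the storage-provisioner end to end.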

TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (1.8s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh -n functional-960066 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 cp functional-960066:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3282606125/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh -n functional-960066 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh -n functional-960066 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.80s)

TestFunctional/parallel/MySQL (15.33s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-960066 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-xv8rn" [97a69117-bc2e-4aae-a4d8-38303cc791a0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-xv8rn" [97a69117-bc2e-4aae-a4d8-38303cc791a0] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.002635579s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-960066 exec mysql-5bb876957f-xv8rn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-960066 exec mysql-5bb876957f-xv8rn -- mysql -ppassword -e "show databases;": exit status 1 (95.34406ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1126 19:43:45.664492   14258 retry.go:31] will retry after 1.001733174s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-960066 exec mysql-5bb876957f-xv8rn -- mysql -ppassword -e "show databases;"
E1126 19:44:50.614760   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:47:06.753721   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:47:34.456527   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:52:06.753742   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (15.33s)
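The first `show databases;` fails with ERROR 2002 because mysqld has not finished initializing its socket, and the harness retries after about a second (`retry.go:31` above). A self-contained sketch of that retry pattern, where `probe` is a stand-in (an assumption) for the real `mysql -ppassword -e "show databases;"` call:

```shell
# Hedged sketch of the retry loop the test harness applies around a
# flaky probe; `probe` simulates a server that only becomes ready on
# the 3rd attempt.
attempt=0
max_attempts=5
probe() {
  [ "$attempt" -ge 3 ]   # stand-in readiness check
}
until probe; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "giving up after $attempt attempts"
    break
  fi
  sleep 0.2   # the harness backs off roughly 1s between tries
done
echo "connected after $attempt attempts"
```

Retrying on ERROR 2002 rather than failing immediately is what lets the test pass once the container's mysqld finishes startup.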

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14258/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo cat /etc/test/nested/copy/14258/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14258.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo cat /etc/ssl/certs/14258.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14258.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo cat /usr/share/ca-certificates/14258.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/142582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo cat /etc/ssl/certs/142582.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/142582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo cat /usr/share/ca-certificates/142582.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-960066 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 ssh "sudo systemctl is-active docker": exit status 1 (312.254105ms)
-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 ssh "sudo systemctl is-active containerd": exit status 1 (327.970907ms)
-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

TestFunctional/parallel/License (0.44s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

TestFunctional/parallel/MountCmd/any-port (5.65s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdany-port4052461515/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764186187000552097" to /tmp/TestFunctionalparallelMountCmdany-port4052461515/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764186187000552097" to /tmp/TestFunctionalparallelMountCmdany-port4052461515/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764186187000552097" to /tmp/TestFunctionalparallelMountCmdany-port4052461515/001/test-1764186187000552097
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (317.831483ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
I1126 19:43:07.318877   14258 retry.go:31] will retry after 297.776012ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 26 19:43 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 26 19:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 26 19:43 test-1764186187000552097
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh cat /mount-9p/test-1764186187000552097
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-960066 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [4e7975b9-6405-47df-a96c-c46f7d52fdba] Pending
helpers_test.go:352: "busybox-mount" [4e7975b9-6405-47df-a96c-c46f7d52fdba] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [4e7975b9-6405-47df-a96c-c46f7d52fdba] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [4e7975b9-6405-47df-a96c-c46f7d52fdba] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.002615041s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-960066 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdany-port4052461515/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.65s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "381.706141ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.008561ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "361.81402ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.869916ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/Version/short (0.13s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

TestFunctional/parallel/MountCmd/specific-port (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdspecific-port334606524/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.418498ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
I1126 19:43:12.938401   14258 retry.go:31] will retry after 341.778371ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdspecific-port334606524/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 ssh "sudo umount -f /mount-9p": exit status 1 (262.088809ms)
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-960066 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdspecific-port334606524/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.64s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3671231235/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3671231235/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3671231235/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T" /mount1: exit status 1 (347.909307ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
I1126 19:43:14.638252   14258 retry.go:31] will retry after 615.896485ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-960066 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3671231235/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3671231235/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3671231235/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960066 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960066 image ls --format short --alsologtostderr:
I1126 19:43:37.627208   53501 out.go:360] Setting OutFile to fd 1 ...
I1126 19:43:37.627513   53501 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:37.627520   53501 out.go:374] Setting ErrFile to fd 2...
I1126 19:43:37.627526   53501 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:37.627805   53501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
I1126 19:43:37.628549   53501 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:37.628722   53501 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:37.629335   53501 cli_runner.go:164] Run: docker container inspect functional-960066 --format={{.State.Status}}
I1126 19:43:37.650250   53501 ssh_runner.go:195] Run: systemctl --version
I1126 19:43:37.650300   53501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-960066
I1126 19:43:37.670275   53501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/functional-960066/id_rsa Username:docker}
I1126 19:43:37.775596   53501 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960066 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/my-image                      │ functional-960066  │ c515e7378a192 │ 1.47MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960066 image ls --format table --alsologtostderr:
I1126 19:43:40.520806   54344 out.go:360] Setting OutFile to fd 1 ...
I1126 19:43:40.521078   54344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:40.521088   54344 out.go:374] Setting ErrFile to fd 2...
I1126 19:43:40.521093   54344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:40.521248   54344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
I1126 19:43:40.521728   54344 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:40.521815   54344 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:40.522189   54344 cli_runner.go:164] Run: docker container inspect functional-960066 --format={{.State.Status}}
I1126 19:43:40.539621   54344 ssh_runner.go:195] Run: systemctl --version
I1126 19:43:40.539658   54344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-960066
I1126 19:43:40.556188   54344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/functional-960066/id_rsa Username:docker}
I1126 19:43:40.652620   54344 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960066 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d6
1779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/cor
edns:v1.12.1"],"size":"76103547"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"i
d":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDiges
ts":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c515e7378a19279f5cdd009b363bef64c6836dd068bb7cdc7115e77164a8aab1","repoDigests":["localhost/my-image@sha256:b2b1dd57cfeea577b283a20962267d30b8d2db6f28725086d9335d83486b46c2"],"repoTags":["localhost/my-image:functional-960066"],"size":"1468744"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a
7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec
84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"f59e9e29c0eba715a61c648277660821d967e258713e359f8e37abe70827a513","repoDigests":["docker.io/library/d45a9c8d4700c1bb5696c53fb62e039127c90c3f54938aca7399fe6b25d30928-tmp@sha256:efca0835b59478f2db6131c2bd2bd42e2731bd71bab73ee17132e40663578952"],"repoTags":[],"size":"1466132"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960066 image ls --format json --alsologtostderr:
I1126 19:43:40.305063   54289 out.go:360] Setting OutFile to fd 1 ...
I1126 19:43:40.305163   54289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:40.305171   54289 out.go:374] Setting ErrFile to fd 2...
I1126 19:43:40.305175   54289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:40.305346   54289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
I1126 19:43:40.305841   54289 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:40.305928   54289 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:40.306306   54289 cli_runner.go:164] Run: docker container inspect functional-960066 --format={{.State.Status}}
I1126 19:43:40.323116   54289 ssh_runner.go:195] Run: systemctl --version
I1126 19:43:40.323155   54289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-960066
I1126 19:43:40.340567   54289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/functional-960066/id_rsa Username:docker}
I1126 19:43:40.436497   54289 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960066 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960066 image ls --format yaml --alsologtostderr:
I1126 19:43:37.942434   53585 out.go:360] Setting OutFile to fd 1 ...
I1126 19:43:37.942772   53585 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:37.942785   53585 out.go:374] Setting ErrFile to fd 2...
I1126 19:43:37.942796   53585 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:37.943086   53585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
I1126 19:43:37.943843   53585 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:37.943996   53585 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:37.944531   53585 cli_runner.go:164] Run: docker container inspect functional-960066 --format={{.State.Status}}
I1126 19:43:37.963788   53585 ssh_runner.go:195] Run: systemctl --version
I1126 19:43:37.963835   53585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-960066
I1126 19:43:37.981988   53585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/functional-960066/id_rsa Username:docker}
I1126 19:43:38.077693   53585 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960066 ssh pgrep buildkitd: exit status 1 (270.746824ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image build -t localhost/my-image:functional-960066 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-960066 image build -t localhost/my-image:functional-960066 testdata/build --alsologtostderr: (1.64890747s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960066 image build -t localhost/my-image:functional-960066 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f59e9e29c0e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-960066
--> c515e7378a1
Successfully tagged localhost/my-image:functional-960066
c515e7378a19279f5cdd009b363bef64c6836dd068bb7cdc7115e77164a8aab1
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960066 image build -t localhost/my-image:functional-960066 testdata/build --alsologtostderr:
I1126 19:43:38.440674   53810 out.go:360] Setting OutFile to fd 1 ...
I1126 19:43:38.440935   53810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:38.440945   53810 out.go:374] Setting ErrFile to fd 2...
I1126 19:43:38.440949   53810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:43:38.441118   53810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
I1126 19:43:38.441638   53810 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:38.442172   53810 config.go:182] Loaded profile config "functional-960066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:43:38.442636   53810 cli_runner.go:164] Run: docker container inspect functional-960066 --format={{.State.Status}}
I1126 19:43:38.460199   53810 ssh_runner.go:195] Run: systemctl --version
I1126 19:43:38.460248   53810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-960066
I1126 19:43:38.478293   53810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/functional-960066/id_rsa Username:docker}
I1126 19:43:38.576939   53810 build_images.go:162] Building image from path: /tmp/build.3289516637.tar
I1126 19:43:38.577040   53810 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1126 19:43:38.584809   53810 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3289516637.tar
I1126 19:43:38.588192   53810 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3289516637.tar: stat -c "%s %y" /var/lib/minikube/build/build.3289516637.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3289516637.tar': No such file or directory
I1126 19:43:38.588214   53810 ssh_runner.go:362] scp /tmp/build.3289516637.tar --> /var/lib/minikube/build/build.3289516637.tar (3072 bytes)
I1126 19:43:38.605838   53810 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3289516637
I1126 19:43:38.613110   53810 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3289516637 -xf /var/lib/minikube/build/build.3289516637.tar
I1126 19:43:38.620665   53810 crio.go:315] Building image: /var/lib/minikube/build/build.3289516637
I1126 19:43:38.620718   53810 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-960066 /var/lib/minikube/build/build.3289516637 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1126 19:43:40.010503   53810 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-960066 /var/lib/minikube/build/build.3289516637 --cgroup-manager=cgroupfs: (1.389757338s)
I1126 19:43:40.010568   53810 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3289516637
I1126 19:43:40.018657   53810 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3289516637.tar
I1126 19:43:40.025625   53810 build_images.go:218] Built localhost/my-image:functional-960066 from /tmp/build.3289516637.tar
I1126 19:43:40.025655   53810 build_images.go:134] succeeded building to: functional-960066
I1126 19:43:40.025661   53810 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.14s)

TestFunctional/parallel/ImageCommands/Setup (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-960066
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.95s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-960066 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-960066 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-960066 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-960066 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 50861: os: process already finished
helpers_test.go:519: unable to terminate pid 50644: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-960066 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-960066 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f9eb134c-44f9-4649-baa5-17305feb4764] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [f9eb134c-44f9-4649-baa5-17305feb4764] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003651245s
I1126 19:43:29.392284   14258 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.24s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image rm kicbase/echo-server:functional-960066 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-960066 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.56.85 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-960066 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-960066 service list: (1.690019994s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-960066 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-960066 service list -o json: (1.691767661s)
functional_test.go:1504: Took "1.691844416s" to run "out/minikube-linux-amd64 -p functional-960066 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-960066
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-960066
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-960066
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (141.4s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m20.715088903s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (141.40s)

TestMultiControlPlane/serial/DeployApp (3.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 kubectl -- rollout status deployment/busybox: (2.021475558s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-lb2gn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-s95kc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-vnxm2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-lb2gn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-s95kc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-vnxm2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-lb2gn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-s95kc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-vnxm2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.90s)

TestMultiControlPlane/serial/PingHostFromPods (1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-lb2gn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-lb2gn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-s95kc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-s95kc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-vnxm2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 kubectl -- exec busybox-7b57f96db7-vnxm2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)

TestMultiControlPlane/serial/AddWorkerNode (53.86s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 node add --alsologtostderr -v 5: (53.010505554s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.86s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-156828 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp testdata/cp-test.txt ha-156828:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3385916061/001/cp-test_ha-156828.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828:/home/docker/cp-test.txt ha-156828-m02:/home/docker/cp-test_ha-156828_ha-156828-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m02 "sudo cat /home/docker/cp-test_ha-156828_ha-156828-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828:/home/docker/cp-test.txt ha-156828-m03:/home/docker/cp-test_ha-156828_ha-156828-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m03 "sudo cat /home/docker/cp-test_ha-156828_ha-156828-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828:/home/docker/cp-test.txt ha-156828-m04:/home/docker/cp-test_ha-156828_ha-156828-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m04 "sudo cat /home/docker/cp-test_ha-156828_ha-156828-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp testdata/cp-test.txt ha-156828-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3385916061/001/cp-test_ha-156828-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m02:/home/docker/cp-test.txt ha-156828:/home/docker/cp-test_ha-156828-m02_ha-156828.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828 "sudo cat /home/docker/cp-test_ha-156828-m02_ha-156828.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m02:/home/docker/cp-test.txt ha-156828-m03:/home/docker/cp-test_ha-156828-m02_ha-156828-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m03 "sudo cat /home/docker/cp-test_ha-156828-m02_ha-156828-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m02:/home/docker/cp-test.txt ha-156828-m04:/home/docker/cp-test_ha-156828-m02_ha-156828-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m04 "sudo cat /home/docker/cp-test_ha-156828-m02_ha-156828-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp testdata/cp-test.txt ha-156828-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3385916061/001/cp-test_ha-156828-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m03:/home/docker/cp-test.txt ha-156828:/home/docker/cp-test_ha-156828-m03_ha-156828.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828 "sudo cat /home/docker/cp-test_ha-156828-m03_ha-156828.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m03:/home/docker/cp-test.txt ha-156828-m02:/home/docker/cp-test_ha-156828-m03_ha-156828-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m02 "sudo cat /home/docker/cp-test_ha-156828-m03_ha-156828-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m03:/home/docker/cp-test.txt ha-156828-m04:/home/docker/cp-test_ha-156828-m03_ha-156828-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m04 "sudo cat /home/docker/cp-test_ha-156828-m03_ha-156828-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp testdata/cp-test.txt ha-156828-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3385916061/001/cp-test_ha-156828-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m04:/home/docker/cp-test.txt ha-156828:/home/docker/cp-test_ha-156828-m04_ha-156828.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828 "sudo cat /home/docker/cp-test_ha-156828-m04_ha-156828.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m04:/home/docker/cp-test.txt ha-156828-m02:/home/docker/cp-test_ha-156828-m04_ha-156828-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m02 "sudo cat /home/docker/cp-test_ha-156828-m04_ha-156828-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 cp ha-156828-m04:/home/docker/cp-test.txt ha-156828-m03:/home/docker/cp-test_ha-156828-m04_ha-156828-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m04 "sudo cat /home/docker/cp-test.txt"
E1126 19:57:06.753812   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 ssh -n ha-156828-m03 "sudo cat /home/docker/cp-test_ha-156828-m04_ha-156828-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.54s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 node stop m02 --alsologtostderr -v 5: (19.049542977s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5: exit status 7 (671.279218ms)

-- stdout --
	ha-156828
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-156828-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-156828-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-156828-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1126 19:57:26.241970   78715 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:57:26.242245   78715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:57:26.242255   78715 out.go:374] Setting ErrFile to fd 2...
	I1126 19:57:26.242261   78715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:57:26.242441   78715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 19:57:26.242631   78715 out.go:368] Setting JSON to false
	I1126 19:57:26.242659   78715 mustload.go:66] Loading cluster: ha-156828
	I1126 19:57:26.242766   78715 notify.go:221] Checking for updates...
	I1126 19:57:26.243303   78715 config.go:182] Loaded profile config "ha-156828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:57:26.243320   78715 status.go:174] checking status of ha-156828 ...
	I1126 19:57:26.243942   78715 cli_runner.go:164] Run: docker container inspect ha-156828 --format={{.State.Status}}
	I1126 19:57:26.262071   78715 status.go:371] ha-156828 host status = "Running" (err=<nil>)
	I1126 19:57:26.262091   78715 host.go:66] Checking if "ha-156828" exists ...
	I1126 19:57:26.262328   78715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-156828
	I1126 19:57:26.281397   78715 host.go:66] Checking if "ha-156828" exists ...
	I1126 19:57:26.281619   78715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:57:26.281676   78715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-156828
	I1126 19:57:26.297052   78715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/ha-156828/id_rsa Username:docker}
	I1126 19:57:26.392314   78715 ssh_runner.go:195] Run: systemctl --version
	I1126 19:57:26.398831   78715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:57:26.410317   78715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:57:26.467529   78715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-26 19:57:26.457482282 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 19:57:26.468053   78715 kubeconfig.go:125] found "ha-156828" server: "https://192.168.49.254:8443"
	I1126 19:57:26.468081   78715 api_server.go:166] Checking apiserver status ...
	I1126 19:57:26.468127   78715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:57:26.479691   78715 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup
	W1126 19:57:26.487645   78715 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1126 19:57:26.487679   78715 ssh_runner.go:195] Run: ls
	I1126 19:57:26.491011   78715 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1126 19:57:26.495610   78715 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1126 19:57:26.495631   78715 status.go:463] ha-156828 apiserver status = Running (err=<nil>)
	I1126 19:57:26.495641   78715 status.go:176] ha-156828 status: &{Name:ha-156828 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 19:57:26.495677   78715 status.go:174] checking status of ha-156828-m02 ...
	I1126 19:57:26.495894   78715 cli_runner.go:164] Run: docker container inspect ha-156828-m02 --format={{.State.Status}}
	I1126 19:57:26.512919   78715 status.go:371] ha-156828-m02 host status = "Stopped" (err=<nil>)
	I1126 19:57:26.512935   78715 status.go:384] host is not running, skipping remaining checks
	I1126 19:57:26.512940   78715 status.go:176] ha-156828-m02 status: &{Name:ha-156828-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 19:57:26.512957   78715 status.go:174] checking status of ha-156828-m03 ...
	I1126 19:57:26.513230   78715 cli_runner.go:164] Run: docker container inspect ha-156828-m03 --format={{.State.Status}}
	I1126 19:57:26.529307   78715 status.go:371] ha-156828-m03 host status = "Running" (err=<nil>)
	I1126 19:57:26.529325   78715 host.go:66] Checking if "ha-156828-m03" exists ...
	I1126 19:57:26.529582   78715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-156828-m03
	I1126 19:57:26.546519   78715 host.go:66] Checking if "ha-156828-m03" exists ...
	I1126 19:57:26.546753   78715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:57:26.546815   78715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-156828-m03
	I1126 19:57:26.563270   78715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/ha-156828-m03/id_rsa Username:docker}
	I1126 19:57:26.658186   78715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:57:26.671021   78715 kubeconfig.go:125] found "ha-156828" server: "https://192.168.49.254:8443"
	I1126 19:57:26.671043   78715 api_server.go:166] Checking apiserver status ...
	I1126 19:57:26.671072   78715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:57:26.681674   78715 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W1126 19:57:26.689382   78715 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1126 19:57:26.689450   78715 ssh_runner.go:195] Run: ls
	I1126 19:57:26.692707   78715 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1126 19:57:26.697148   78715 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1126 19:57:26.697164   78715 status.go:463] ha-156828-m03 apiserver status = Running (err=<nil>)
	I1126 19:57:26.697177   78715 status.go:176] ha-156828-m03 status: &{Name:ha-156828-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 19:57:26.697206   78715 status.go:174] checking status of ha-156828-m04 ...
	I1126 19:57:26.697422   78715 cli_runner.go:164] Run: docker container inspect ha-156828-m04 --format={{.State.Status}}
	I1126 19:57:26.714902   78715 status.go:371] ha-156828-m04 host status = "Running" (err=<nil>)
	I1126 19:57:26.714919   78715 host.go:66] Checking if "ha-156828-m04" exists ...
	I1126 19:57:26.715161   78715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-156828-m04
	I1126 19:57:26.732277   78715 host.go:66] Checking if "ha-156828-m04" exists ...
	I1126 19:57:26.732528   78715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:57:26.732564   78715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-156828-m04
	I1126 19:57:26.748663   78715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/ha-156828-m04/id_rsa Username:docker}
	I1126 19:57:26.842995   78715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:57:26.854834   78715 status.go:176] ha-156828-m04 status: &{Name:ha-156828-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 node start m02 --alsologtostderr -v 5: (7.460944107s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.36s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 stop --alsologtostderr -v 5
E1126 19:58:06.734624   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:06.743566   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:06.755191   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:06.776541   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:06.817941   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:06.899399   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:07.060908   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:07.382661   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:08.024688   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:09.306625   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:11.868253   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:16.989937   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 stop --alsologtostderr -v 5: (50.013393029s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 start --wait true --alsologtostderr -v 5
E1126 19:58:27.231908   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:29.819518   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:58:47.713949   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 start --wait true --alsologtostderr -v 5: (56.283694395s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.42s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 node delete m03 --alsologtostderr -v 5
E1126 19:59:28.676445   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 node delete m03 --alsologtostderr -v 5: (9.64072123s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.42s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 stop --alsologtostderr -v 5: (43.72952713s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5: exit status 7 (110.343606ms)

-- stdout --
	ha-156828
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-156828-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-156828-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1126 20:00:18.064808   92829 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:00:18.065062   92829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:00:18.065071   92829 out.go:374] Setting ErrFile to fd 2...
	I1126 20:00:18.065075   92829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:00:18.065267   92829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:00:18.065429   92829 out.go:368] Setting JSON to false
	I1126 20:00:18.065454   92829 mustload.go:66] Loading cluster: ha-156828
	I1126 20:00:18.065591   92829 notify.go:221] Checking for updates...
	I1126 20:00:18.065945   92829 config.go:182] Loaded profile config "ha-156828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:00:18.065968   92829 status.go:174] checking status of ha-156828 ...
	I1126 20:00:18.066536   92829 cli_runner.go:164] Run: docker container inspect ha-156828 --format={{.State.Status}}
	I1126 20:00:18.083997   92829 status.go:371] ha-156828 host status = "Stopped" (err=<nil>)
	I1126 20:00:18.084011   92829 status.go:384] host is not running, skipping remaining checks
	I1126 20:00:18.084016   92829 status.go:176] ha-156828 status: &{Name:ha-156828 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:00:18.084044   92829 status.go:174] checking status of ha-156828-m02 ...
	I1126 20:00:18.084269   92829 cli_runner.go:164] Run: docker container inspect ha-156828-m02 --format={{.State.Status}}
	I1126 20:00:18.102219   92829 status.go:371] ha-156828-m02 host status = "Stopped" (err=<nil>)
	I1126 20:00:18.102234   92829 status.go:384] host is not running, skipping remaining checks
	I1126 20:00:18.102239   92829 status.go:176] ha-156828-m02 status: &{Name:ha-156828-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:00:18.102256   92829 status.go:174] checking status of ha-156828-m04 ...
	I1126 20:00:18.102536   92829 cli_runner.go:164] Run: docker container inspect ha-156828-m04 --format={{.State.Status}}
	I1126 20:00:18.120000   92829 status.go:371] ha-156828-m04 host status = "Stopped" (err=<nil>)
	I1126 20:00:18.120017   92829 status.go:384] host is not running, skipping remaining checks
	I1126 20:00:18.120021   92829 status.go:176] ha-156828-m04 status: &{Name:ha-156828-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.84s)
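The `status:` lines in the stderr block above are Go struct dumps (`&{Name:... Host:Stopped ...}`). When grepping reports like this one, the key/value pairs can be pulled out mechanically; a minimal sketch (the regex is an illustration, not minikube's own format contract):

```python
import re

# One of the status lines from the stderr block above, verbatim.
line = ('I1126 20:00:18.084016   92829 status.go:176] ha-156828 status: '
        '&{Name:ha-156828 Host:Stopped Kubelet:Stopped APIServer:Stopped '
        'Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}')

def parse_status(log_line):
    """Extract the Key:Value pairs from the Go struct dump in a status line."""
    body = re.search(r"&\{(.*)\}", log_line).group(1)
    # Keys are word characters; values run to the next whitespace (may be empty).
    return dict(re.findall(r"(\w+):(\S*)", body))

st = parse_status(line)
print(st["Host"], st["Kubelet"])  # Stopped Stopped
```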

TestMultiControlPlane/serial/RestartCluster (56.26s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1126 20:00:50.598522   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.477456826s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.26s)
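The `kubectl get nodes -o go-template` call above prints the status of each node's `Ready` condition, one per line. A minimal Python equivalent of the same check, run against the JSON form of the data (`kubectl get nodes -o json`); the sample document below is illustrative, not captured from this run:

```python
import json

# Two nodes, shaped like `kubectl get nodes -o json` output (illustrative data).
sample = json.loads("""
{
  "items": [
    {"metadata": {"name": "ha-156828"},
     "status": {"conditions": [
       {"type": "MemoryPressure", "status": "False"},
       {"type": "Ready", "status": "True"}]}},
    {"metadata": {"name": "ha-156828-m02"},
     "status": {"conditions": [
       {"type": "Ready", "status": "True"}]}}
  ]
}
""")

def ready_statuses(nodes_doc):
    """Mirror the go-template: for every node, emit the Ready condition status."""
    return [
        cond["status"]
        for node in nodes_doc["items"]
        for cond in node["status"]["conditions"]
        if cond["type"] == "Ready"
    ]

statuses = ready_statuses(sample)
print(statuses)  # every entry should be "True" for a healthy cluster
```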

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (70.28s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 node add --control-plane --alsologtostderr -v 5
E1126 20:02:06.753975   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-156828 node add --control-plane --alsologtostderr -v 5: (1m9.443211827s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-156828 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (67.55s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-251841 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1126 20:03:06.738569   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:03:34.446865   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-251841 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m7.554263915s)
--- PASS: TestJSONOutput/start/Command (67.55s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.04s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-251841 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-251841 --output=json --user=testUser: (6.040200693s)
--- PASS: TestJSONOutput/stop/Command (6.04s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-729305 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-729305 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.095297ms)

-- stdout --
	{"specversion":"1.0","id":"a1def0fc-7169-4ca6-8d00-0ff5a663a2e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-729305] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4990cbe3-7fdb-4daf-8112-be20e3db22a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21974"}}
	{"specversion":"1.0","id":"53e9f52f-4a6f-402a-a327-fbc098574b2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e5481320-301f-4d11-8b09-60ab7020ccd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig"}}
	{"specversion":"1.0","id":"2889f1cf-5db2-4600-a5c4-74ca9941ccb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube"}}
	{"specversion":"1.0","id":"aa092449-f51f-4436-8d4a-137769f5c0ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"eae4ce90-b5fc-4c7d-a183-1726ceed809b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"05a8f4d8-c0ac-433b-8e3c-1754c4af5ab2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-729305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-729305
--- PASS: TestErrorJSONOutput (0.22s)
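With `--output=json`, minikube emits one CloudEvents-formatted JSON object per line, as seen in the stdout block above. A minimal consumer that scans such a stream for an error event and pulls out its payload (the two sample lines are taken from this run's output):

```python
import json

# Two events from the TestErrorJSONOutput run above: one info, one error.
stream = """\
{"specversion":"1.0","id":"4990cbe3-7fdb-4daf-8112-be20e3db22a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21974"}}
{"specversion":"1.0","id":"05a8f4d8-c0ac-433b-8e3c-1754c4af5ab2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
"""

def first_error(lines):
    """Return the data payload of the first io.k8s.sigs.minikube.error event."""
    for line in lines.splitlines():
        event = json.loads(line)
        if event["type"] == "io.k8s.sigs.minikube.error":
            return event["data"]
    return None

err = first_error(stream)
print(err["name"], err["exitcode"])  # DRV_UNSUPPORTED_OS 56
```

The `exitcode` field in the error event matches the process exit status 56 that the test asserts on.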

TestKicCustomNetwork/create_custom_network (26.3s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-018498 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-018498 --network=: (24.206160048s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-018498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-018498
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-018498: (2.076417046s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.30s)

TestKicCustomNetwork/use_default_bridge_network (22.16s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-449933 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-449933 --network=bridge: (20.190411624s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-449933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-449933
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-449933: (1.946578522s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.16s)

TestKicExistingNetwork (23.42s)
=== RUN   TestKicExistingNetwork
I1126 20:04:45.744624   14258 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1126 20:04:45.760537   14258 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1126 20:04:45.760604   14258 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1126 20:04:45.760621   14258 cli_runner.go:164] Run: docker network inspect existing-network
W1126 20:04:45.775481   14258 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1126 20:04:45.775516   14258 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1126 20:04:45.775530   14258 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1126 20:04:45.775676   14258 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1126 20:04:45.791531   14258 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f2fbfec5d4b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:65:0e:d3:0d:13} reservation:<nil>}
I1126 20:04:45.791963   14258 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b55210}
I1126 20:04:45.791990   14258 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1126 20:04:45.792042   14258 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1126 20:04:45.834508   14258 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-529226 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-529226 --network=existing-network: (21.348516001s)
helpers_test.go:175: Cleaning up "existing-network-529226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-529226
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-529226: (1.948993213s)
I1126 20:05:09.149148   14258 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.42s)
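The log above shows minikube skipping the taken subnet 192.168.49.0/24 and settling on 192.168.58.0/24 for the new network. A simplified sketch of that scan (the real selection logic lives in minikube's network.go; the step of 9 here merely reproduces what this log shows and is not a documented contract):

```python
import ipaddress

def pick_free_subnet(taken, start="192.168.49.0/24", step=9, attempts=20):
    """Walk candidate private /24s, returning the first not already in use."""
    net = ipaddress.ip_network(start)
    for _ in range(attempts):
        if str(net) not in taken:
            return str(net)
        # Advance the third octet by `step` to form the next candidate /24.
        net = ipaddress.ip_network(
            (int(net.network_address) + step * 256, net.prefixlen))
    raise RuntimeError("no free subnet found")

taken = {"192.168.49.0/24"}  # e.g. the bridge already used by another profile
print(pick_free_subnet(taken))  # -> 192.168.58.0/24
```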

TestKicCustomSubnet (23.71s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-740132 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-740132 --subnet=192.168.60.0/24: (21.604604794s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-740132 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-740132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-740132
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-740132: (2.089976062s)
--- PASS: TestKicCustomSubnet (23.71s)

TestKicStaticIP (25.38s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-865722 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-865722 --static-ip=192.168.200.200: (23.131882244s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-865722 ip
helpers_test.go:175: Cleaning up "static-ip-865722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-865722
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-865722: (2.109352853s)
--- PASS: TestKicStaticIP (25.38s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (47.6s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-003781 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-003781 --driver=docker  --container-runtime=crio: (18.723910192s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-006876 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-006876 --driver=docker  --container-runtime=crio: (23.11856151s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-003781
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-006876
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-006876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-006876
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-006876: (2.288174012s)
helpers_test.go:175: Cleaning up "first-003781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-003781
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-003781: (2.284106953s)
--- PASS: TestMinikubeProfile (47.60s)

TestMountStart/serial/StartWithMountFirst (4.66s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-741537 --memory=3072 --mount-string /tmp/TestMountStartserial3365579654/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-741537 --memory=3072 --mount-string /tmp/TestMountStartserial3365579654/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.657470244s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.66s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-741537 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.61s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-752352 --memory=3072 --mount-string /tmp/TestMountStartserial3365579654/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-752352 --memory=3072 --mount-string /tmp/TestMountStartserial3365579654/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.606519886s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.61s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-752352 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.65s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-741537 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-741537 --alsologtostderr -v=5: (1.646971212s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-752352 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.25s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-752352
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-752352: (1.248042763s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.26s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-752352
E1126 20:07:06.753832   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-752352: (6.256431668s)
--- PASS: TestMountStart/serial/RestartStopped (7.26s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-752352 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (91.69s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-939126 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1126 20:08:06.731521   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-939126 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m31.226153859s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (91.69s)

TestMultiNode/serial/DeployApp2Nodes (3.44s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-939126 -- rollout status deployment/busybox: (2.066317953s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-jd5fs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-m4bs8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-jd5fs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-m4bs8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-jd5fs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-m4bs8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.44s)
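Note: the `-o jsonpath='{.items[*].metadata.name}'` queries above return the matched values joined by spaces. A minimal offline sketch of that extraction over a canned pod list (sample data, not taken from this run; the grep/cut emulation stands in for kubectl's jsonpath walker):

```shell
# Canned pod list mimicking `kubectl get pods -o json` output.
pods='{"items":[{"metadata":{"name":"busybox-7b57f96db7-jd5fs"}},
               {"metadata":{"name":"busybox-7b57f96db7-m4bs8"}}]}'

# {.items[*].metadata.name} -> every "name" value, space-separated.
names=$(printf '%s' "$pods" | grep -o '"name":"[^"]*"' | cut -d'"' -f4 | xargs)
echo "$names"
```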

TestMultiNode/serial/PingHostFrom2Pods (0.68s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-jd5fs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-jd5fs -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-m4bs8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-939126 -- exec busybox-7b57f96db7-m4bs8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
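Note: the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above pulls the host IP out of busybox nslookup output. A local sketch, assuming busybox's five-line answer format (the sample values are illustrative, not from this run):

```shell
# Sample busybox nslookup output as seen inside a pod.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal'

# NR==5 keeps the final "Address 1: <ip> <name>" line; field 3 is the IP,
# which the test then feeds to `ping -c 1`.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```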

TestMultiNode/serial/AddNode (22.33s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-939126 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-939126 -v=5 --alsologtostderr: (21.700516209s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (22.33s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-939126 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (9.46s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp testdata/cp-test.txt multinode-939126:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp multinode-939126:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3022102751/001/cp-test_multinode-939126.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp multinode-939126:/home/docker/cp-test.txt multinode-939126-m02:/home/docker/cp-test_multinode-939126_multinode-939126-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m02 "sudo cat /home/docker/cp-test_multinode-939126_multinode-939126-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp multinode-939126:/home/docker/cp-test.txt multinode-939126-m03:/home/docker/cp-test_multinode-939126_multinode-939126-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m03 "sudo cat /home/docker/cp-test_multinode-939126_multinode-939126-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp testdata/cp-test.txt multinode-939126-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp multinode-939126-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3022102751/001/cp-test_multinode-939126-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp multinode-939126-m02:/home/docker/cp-test.txt multinode-939126:/home/docker/cp-test_multinode-939126-m02_multinode-939126.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126 "sudo cat /home/docker/cp-test_multinode-939126-m02_multinode-939126.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp multinode-939126-m02:/home/docker/cp-test.txt multinode-939126-m03:/home/docker/cp-test_multinode-939126-m02_multinode-939126-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m03 "sudo cat /home/docker/cp-test_multinode-939126-m02_multinode-939126-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp testdata/cp-test.txt multinode-939126-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp multinode-939126-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3022102751/001/cp-test_multinode-939126-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp multinode-939126-m03:/home/docker/cp-test.txt multinode-939126:/home/docker/cp-test_multinode-939126-m03_multinode-939126.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126 "sudo cat /home/docker/cp-test_multinode-939126-m03_multinode-939126.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 cp multinode-939126-m03:/home/docker/cp-test.txt multinode-939126-m02:/home/docker/cp-test_multinode-939126-m03_multinode-939126-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 ssh -n multinode-939126-m02 "sudo cat /home/docker/cp-test_multinode-939126-m03_multinode-939126-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.46s)
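Note: every `minikube cp` above is immediately followed by an `ssh ... sudo cat` of the destination, i.e. a copy-then-read-back verification. A local stand-in for that round-trip pattern (plain `cp`/`cat` substitute for `minikube cp` and `minikube ssh`):

```shell
# Write a known payload, copy it, read the copy back, and compare.
workdir=$(mktemp -d)
printf 'hello from cp-test\n' > "$workdir/cp-test.txt"

cp "$workdir/cp-test.txt" "$workdir/cp-test-copy.txt"   # "minikube cp src dst"
roundtrip=$(cat "$workdir/cp-test-copy.txt")            # "minikube ssh -n ... sudo cat dst"

[ "$roundtrip" = 'hello from cp-test' ] && echo "round-trip OK"
rm -rf "$workdir"
```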

TestMultiNode/serial/StopNode (2.19s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-939126 node stop m03: (1.241020744s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-939126 status: exit status 7 (473.257734ms)

-- stdout --
	multinode-939126
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-939126-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-939126-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-939126 status --alsologtostderr: exit status 7 (476.591231ms)

-- stdout --
	multinode-939126
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-939126-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-939126-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1126 20:09:21.275301  152715 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:09:21.275398  152715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:09:21.275409  152715 out.go:374] Setting ErrFile to fd 2...
	I1126 20:09:21.275415  152715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:09:21.275660  152715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:09:21.275826  152715 out.go:368] Setting JSON to false
	I1126 20:09:21.275849  152715 mustload.go:66] Loading cluster: multinode-939126
	I1126 20:09:21.275961  152715 notify.go:221] Checking for updates...
	I1126 20:09:21.276168  152715 config.go:182] Loaded profile config "multinode-939126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:09:21.276181  152715 status.go:174] checking status of multinode-939126 ...
	I1126 20:09:21.276646  152715 cli_runner.go:164] Run: docker container inspect multinode-939126 --format={{.State.Status}}
	I1126 20:09:21.294849  152715 status.go:371] multinode-939126 host status = "Running" (err=<nil>)
	I1126 20:09:21.294869  152715 host.go:66] Checking if "multinode-939126" exists ...
	I1126 20:09:21.295103  152715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-939126
	I1126 20:09:21.311374  152715 host.go:66] Checking if "multinode-939126" exists ...
	I1126 20:09:21.311613  152715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:09:21.311655  152715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-939126
	I1126 20:09:21.327506  152715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/multinode-939126/id_rsa Username:docker}
	I1126 20:09:21.422431  152715 ssh_runner.go:195] Run: systemctl --version
	I1126 20:09:21.428183  152715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:09:21.439568  152715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:09:21.494593  152715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-26 20:09:21.485610362 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:09:21.495115  152715 kubeconfig.go:125] found "multinode-939126" server: "https://192.168.67.2:8443"
	I1126 20:09:21.495152  152715 api_server.go:166] Checking apiserver status ...
	I1126 20:09:21.495191  152715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:21.506706  152715 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1239/cgroup
	W1126 20:09:21.514692  152715 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1239/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:09:21.514739  152715 ssh_runner.go:195] Run: ls
	I1126 20:09:21.518071  152715 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1126 20:09:21.521925  152715 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1126 20:09:21.521945  152715 status.go:463] multinode-939126 apiserver status = Running (err=<nil>)
	I1126 20:09:21.521955  152715 status.go:176] multinode-939126 status: &{Name:multinode-939126 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:09:21.521974  152715 status.go:174] checking status of multinode-939126-m02 ...
	I1126 20:09:21.522213  152715 cli_runner.go:164] Run: docker container inspect multinode-939126-m02 --format={{.State.Status}}
	I1126 20:09:21.538846  152715 status.go:371] multinode-939126-m02 host status = "Running" (err=<nil>)
	I1126 20:09:21.538861  152715 host.go:66] Checking if "multinode-939126-m02" exists ...
	I1126 20:09:21.539135  152715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-939126-m02
	I1126 20:09:21.555564  152715 host.go:66] Checking if "multinode-939126-m02" exists ...
	I1126 20:09:21.555767  152715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:09:21.555797  152715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-939126-m02
	I1126 20:09:21.573172  152715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21974-10722/.minikube/machines/multinode-939126-m02/id_rsa Username:docker}
	I1126 20:09:21.667326  152715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:09:21.678827  152715 status.go:176] multinode-939126-m02 status: &{Name:multinode-939126-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:09:21.678859  152715 status.go:174] checking status of multinode-939126-m03 ...
	I1126 20:09:21.679100  152715 cli_runner.go:164] Run: docker container inspect multinode-939126-m03 --format={{.State.Status}}
	I1126 20:09:21.695897  152715 status.go:371] multinode-939126-m03 host status = "Stopped" (err=<nil>)
	I1126 20:09:21.695913  152715 status.go:384] host is not running, skipping remaining checks
	I1126 20:09:21.695925  152715 status.go:176] multinode-939126-m03 status: &{Name:multinode-939126-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
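Note: the test above treats the non-zero exit from `minikube status` as expected, because `status` exits non-zero (status 7 in this run) when any node is stopped rather than only on hard failures. A script driving `status` therefore has to branch on the exit code explicitly instead of relying on `set -e`; a sketch with a stand-in command (`false` substitutes for the real `minikube -p <profile> status`):

```shell
# Stand-in for: out/minikube-linux-amd64 -p multinode-939126 status
status_of() { false; }

if status_of multinode-939126; then
  echo "all nodes running"
else
  rc=$?   # non-zero here means "some node is not running", not a crash
  echo "some node not running (exit $rc)"
fi
```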

TestMultiNode/serial/StartAfterStop (6.99s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-939126 node start m03 -v=5 --alsologtostderr: (6.309341582s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.99s)

TestMultiNode/serial/RestartKeepsNodes (56.64s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-939126
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-939126
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-939126: (29.448877891s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-939126 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-939126 --wait=true -v=5 --alsologtostderr: (27.072814465s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-939126
--- PASS: TestMultiNode/serial/RestartKeepsNodes (56.64s)

TestMultiNode/serial/DeleteNode (4.94s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-939126 node delete m03: (4.362486481s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.94s)

TestMultiNode/serial/StopMultiNode (28.56s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-939126 stop: (28.375220128s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-939126 status: exit status 7 (92.56361ms)

-- stdout --
	multinode-939126
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-939126-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-939126 status --alsologtostderr: exit status 7 (91.484414ms)

-- stdout --
	multinode-939126
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-939126-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1126 20:10:58.784863  162210 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:10:58.785086  162210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:10:58.785094  162210 out.go:374] Setting ErrFile to fd 2...
	I1126 20:10:58.785097  162210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:10:58.785267  162210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:10:58.785407  162210 out.go:368] Setting JSON to false
	I1126 20:10:58.785430  162210 mustload.go:66] Loading cluster: multinode-939126
	I1126 20:10:58.785527  162210 notify.go:221] Checking for updates...
	I1126 20:10:58.785771  162210 config.go:182] Loaded profile config "multinode-939126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:10:58.785783  162210 status.go:174] checking status of multinode-939126 ...
	I1126 20:10:58.786185  162210 cli_runner.go:164] Run: docker container inspect multinode-939126 --format={{.State.Status}}
	I1126 20:10:58.804676  162210 status.go:371] multinode-939126 host status = "Stopped" (err=<nil>)
	I1126 20:10:58.804702  162210 status.go:384] host is not running, skipping remaining checks
	I1126 20:10:58.804717  162210 status.go:176] multinode-939126 status: &{Name:multinode-939126 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:10:58.804773  162210 status.go:174] checking status of multinode-939126-m02 ...
	I1126 20:10:58.805011  162210 cli_runner.go:164] Run: docker container inspect multinode-939126-m02 --format={{.State.Status}}
	I1126 20:10:58.821844  162210 status.go:371] multinode-939126-m02 host status = "Stopped" (err=<nil>)
	I1126 20:10:58.821858  162210 status.go:384] host is not running, skipping remaining checks
	I1126 20:10:58.821863  162210 status.go:176] multinode-939126-m02 status: &{Name:multinode-939126-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.56s)

TestMultiNode/serial/RestartMultiNode (48.36s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-939126 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-939126 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.782228805s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-939126 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.36s)

TestMultiNode/serial/ValidateNameConflict (26.49s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-939126
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-939126-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-939126-m02 --driver=docker  --container-runtime=crio: exit status 14 (69.975276ms)

-- stdout --
	* [multinode-939126-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-939126-m02' is duplicated with machine name 'multinode-939126-m02' in profile 'multinode-939126'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-939126-m03 --driver=docker  --container-runtime=crio
E1126 20:12:06.753779   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-939126-m03 --driver=docker  --container-runtime=crio: (23.768625946s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-939126
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-939126: exit status 80 (285.21213ms)

-- stdout --
	* Adding node m03 to cluster multinode-939126 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-939126-m03 already exists in multinode-939126-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-939126-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-939126-m03: (2.313353545s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.49s)

TestPreload (108.93s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-972592 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1126 20:13:06.729842   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-972592 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (49.382775986s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-972592 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-972592
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-972592: (7.96609271s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-972592 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-972592 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (48.184191536s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-972592 image list
helpers_test.go:175: Cleaning up "test-preload-972592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-972592
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-972592: (2.330995209s)
--- PASS: TestPreload (108.93s)

TestScheduledStopUnix (95.37s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-926822 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-926822 --memory=3072 --driver=docker  --container-runtime=crio: (18.953634559s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-926822 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1126 20:14:25.708484  179344 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:14:25.708706  179344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:14:25.708715  179344 out.go:374] Setting ErrFile to fd 2...
	I1126 20:14:25.708718  179344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:14:25.708906  179344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:14:25.709105  179344 out.go:368] Setting JSON to false
	I1126 20:14:25.709187  179344 mustload.go:66] Loading cluster: scheduled-stop-926822
	I1126 20:14:25.709448  179344 config.go:182] Loaded profile config "scheduled-stop-926822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:14:25.709518  179344 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/config.json ...
	I1126 20:14:25.709688  179344 mustload.go:66] Loading cluster: scheduled-stop-926822
	I1126 20:14:25.709783  179344 config.go:182] Loaded profile config "scheduled-stop-926822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-926822 -n scheduled-stop-926822
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-926822 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1126 20:14:26.081137  179501 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:14:26.081394  179501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:14:26.081403  179501 out.go:374] Setting ErrFile to fd 2...
	I1126 20:14:26.081408  179501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:14:26.081630  179501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:14:26.081862  179501 out.go:368] Setting JSON to false
	I1126 20:14:26.082029  179501 daemonize_unix.go:73] killing process 179383 as it is an old scheduled stop
	I1126 20:14:26.082135  179501 mustload.go:66] Loading cluster: scheduled-stop-926822
	I1126 20:14:26.082512  179501 config.go:182] Loaded profile config "scheduled-stop-926822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:14:26.082637  179501 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/config.json ...
	I1126 20:14:26.082860  179501 mustload.go:66] Loading cluster: scheduled-stop-926822
	I1126 20:14:26.082962  179501 config.go:182] Loaded profile config "scheduled-stop-926822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1126 20:14:26.087936   14258 retry.go:31] will retry after 108.64µs: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.089067   14258 retry.go:31] will retry after 203.604µs: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.090207   14258 retry.go:31] will retry after 311.13µs: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.091340   14258 retry.go:31] will retry after 361.99µs: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.092486   14258 retry.go:31] will retry after 260.557µs: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.093618   14258 retry.go:31] will retry after 380.107µs: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.094742   14258 retry.go:31] will retry after 1.502819ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.096929   14258 retry.go:31] will retry after 1.92126ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.099122   14258 retry.go:31] will retry after 3.047557ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.102256   14258 retry.go:31] will retry after 4.253865ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.107522   14258 retry.go:31] will retry after 5.534725ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.113722   14258 retry.go:31] will retry after 5.217198ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.119927   14258 retry.go:31] will retry after 15.684709ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.136129   14258 retry.go:31] will retry after 14.132786ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.151289   14258 retry.go:31] will retry after 23.000543ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
I1126 20:14:26.174545   14258 retry.go:31] will retry after 53.867282ms: open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-926822 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1126 20:14:29.808128   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-926822 -n scheduled-stop-926822
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-926822
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-926822 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1126 20:14:51.955911  180139 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:14:51.955996  180139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:14:51.956004  180139 out.go:374] Setting ErrFile to fd 2...
	I1126 20:14:51.956008  180139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:14:51.956172  180139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:14:51.956381  180139 out.go:368] Setting JSON to false
	I1126 20:14:51.956451  180139 mustload.go:66] Loading cluster: scheduled-stop-926822
	I1126 20:14:51.956736  180139 config.go:182] Loaded profile config "scheduled-stop-926822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:14:51.956806  180139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/scheduled-stop-926822/config.json ...
	I1126 20:14:51.957015  180139 mustload.go:66] Loading cluster: scheduled-stop-926822
	I1126 20:14:51.957115  180139 config.go:182] Loaded profile config "scheduled-stop-926822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1126 20:15:09.822807   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-926822
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-926822: exit status 7 (74.558285ms)

-- stdout --
	scheduled-stop-926822
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-926822 -n scheduled-stop-926822
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-926822 -n scheduled-stop-926822: exit status 7 (73.847425ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-926822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-926822
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-926822: (4.962917944s)
--- PASS: TestScheduledStopUnix (95.37s)

TestInsufficientStorage (9.22s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-946161 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-946161 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.799423484s)

-- stdout --
	{"specversion":"1.0","id":"5cffc475-f91b-421b-89e0-d676c7e76a34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-946161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"256eb097-afd2-4dd3-8be9-ac3036e495ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21974"}}
	{"specversion":"1.0","id":"48b7a569-32be-4ca7-9044-ba834630f59e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"40505cc7-c026-42ad-84f2-62da9f861e5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig"}}
	{"specversion":"1.0","id":"2fa8cafd-e4d3-41ea-b02e-d5aa865d8066","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube"}}
	{"specversion":"1.0","id":"9774f76c-58fd-46ae-aae7-1e5b4b588884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1a236265-4147-4aff-8f9c-c1fc3511e210","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"af865c26-e3e4-479e-8ad2-23706a9cba31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c8d35437-e639-4cef-9457-c0b543d8e7e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ed27e3c3-e41c-4aca-ba14-c38375cad7ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"12584cd9-26d3-4454-9641-fa9845bd6929","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"291c123c-d4f8-4857-b88a-71592f5d71da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-946161\" primary control-plane node in \"insufficient-storage-946161\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"93080b6d-0348-45b9-ab78-98104583475e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"75c3d8e7-aebb-4446-af4f-a3e5fc28c020","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fff143b-52cc-4838-a0c6-d98fe4208da4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-946161 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-946161 --output=json --layout=cluster: exit status 7 (282.04757ms)

-- stdout --
	{"Name":"insufficient-storage-946161","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-946161","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1126 20:15:49.142036  182680 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-946161" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-946161 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-946161 --output=json --layout=cluster: exit status 7 (279.337636ms)

-- stdout --
	{"Name":"insufficient-storage-946161","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-946161","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1126 20:15:49.422408  182791 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-946161" does not appear in /home/jenkins/minikube-integration/21974-10722/kubeconfig
	E1126 20:15:49.432542  182791 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/insufficient-storage-946161/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-946161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-946161
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-946161: (1.860067499s)
--- PASS: TestInsufficientStorage (9.22s)

TestRunningBinaryUpgrade (47.67s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.628005722 start -p running-upgrade-612362 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.628005722 start -p running-upgrade-612362 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.465627858s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-612362 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-612362 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.958117894s)
helpers_test.go:175: Cleaning up "running-upgrade-612362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-612362
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-612362: (2.490014044s)
--- PASS: TestRunningBinaryUpgrade (47.67s)

TestKubernetesUpgrade (300.83s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1126 20:17:06.754227   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.699905928s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-225144
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-225144: (4.285228698s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-225144 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-225144 status --format={{.Host}}: exit status 7 (99.616525ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.897767617s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-225144 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (75.292737ms)

-- stdout --
	* [kubernetes-upgrade-225144] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-225144
	    minikube start -p kubernetes-upgrade-225144 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2251442 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-225144 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-225144 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.257669663s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-225144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-225144
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-225144: (2.460774809s)
--- PASS: TestKubernetesUpgrade (300.83s)

TestMissingContainerUpgrade (88.38s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.4028746844 start -p missing-upgrade-521324 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.4028746844 start -p missing-upgrade-521324 --memory=3072 --driver=docker  --container-runtime=crio: (34.735358778s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-521324
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-521324: (10.47563621s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-521324
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-521324 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-521324 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.103284669s)
helpers_test.go:175: Cleaning up "missing-upgrade-521324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-521324
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-521324: (2.376831121s)
--- PASS: TestMissingContainerUpgrade (88.38s)

TestPause/serial/Start (77.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-088343 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-088343 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m17.977210633s)
--- PASS: TestPause/serial/Start (77.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237154 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-237154 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.446499ms)

-- stdout --
	* [NoKubernetes-237154] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (33.49s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237154 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-237154 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.958504602s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-237154 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.49s)

TestNoKubernetes/serial/StartWithStopK8s (23.64s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-237154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.944291483s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-237154 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-237154 status -o json: exit status 2 (313.683864ms)

-- stdout --
	{"Name":"NoKubernetes-237154","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-237154
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-237154: (3.382737519s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.64s)

TestNoKubernetes/serial/Start (4.18s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-237154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.18397033s)
--- PASS: TestNoKubernetes/serial/Start (4.18s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21974-10722/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-237154 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-237154 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.943474ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (1.75s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.75s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-237154
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-237154: (1.270468092s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (6.57s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-237154 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-237154 --driver=docker  --container-runtime=crio: (6.574491361s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.57s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-237154 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-237154 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.844239ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestPause/serial/SecondStartNoReconfiguration (7.27s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-088343 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-088343 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.258136338s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.27s)

TestStoppedBinaryUpgrade/Setup (0.67s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.67s)

TestStoppedBinaryUpgrade/Upgrade (288.68s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1445870266 start -p stopped-upgrade-211103 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1445870266 start -p stopped-upgrade-211103 --memory=3072 --vm-driver=docker  --container-runtime=crio: (24.791708994s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1445870266 -p stopped-upgrade-211103 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1445870266 -p stopped-upgrade-211103 stop: (1.358268863s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-211103 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-211103 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.525238828s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (288.68s)

TestNetworkPlugins/group/false (3.48s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-825702 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-825702 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (163.366504ms)

-- stdout --
	* [false-825702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1126 20:18:40.683870  230069 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:18:40.684137  230069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:18:40.684146  230069 out.go:374] Setting ErrFile to fd 2...
	I1126 20:18:40.684153  230069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:18:40.684353  230069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-10722/.minikube/bin
	I1126 20:18:40.684821  230069 out.go:368] Setting JSON to false
	I1126 20:18:40.685982  230069 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3671,"bootTime":1764184650,"procs":372,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:18:40.686034  230069 start.go:143] virtualization: kvm guest
	I1126 20:18:40.688006  230069 out.go:179] * [false-825702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:18:40.689274  230069 notify.go:221] Checking for updates...
	I1126 20:18:40.689410  230069 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:18:40.690840  230069 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:18:40.694017  230069 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-10722/kubeconfig
	I1126 20:18:40.695602  230069 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-10722/.minikube
	I1126 20:18:40.697011  230069 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:18:40.698368  230069 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:18:40.700013  230069 config.go:182] Loaded profile config "cert-expiration-571738": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:18:40.700171  230069 config.go:182] Loaded profile config "kubernetes-upgrade-225144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:18:40.700288  230069 config.go:182] Loaded profile config "stopped-upgrade-211103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1126 20:18:40.700386  230069 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:18:40.724382  230069 docker.go:124] docker version: linux-29.0.4:Docker Engine - Community
	I1126 20:18:40.724515  230069 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:18:40.782622  230069 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-26 20:18:40.773029887 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1126 20:18:40.782716  230069 docker.go:319] overlay module found
	I1126 20:18:40.786175  230069 out.go:179] * Using the docker driver based on user configuration
	I1126 20:18:40.787543  230069 start.go:309] selected driver: docker
	I1126 20:18:40.787558  230069 start.go:927] validating driver "docker" against <nil>
	I1126 20:18:40.787569  230069 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:18:40.789270  230069 out.go:203] 
	W1126 20:18:40.790402  230069 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1126 20:18:40.791565  230069 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-825702 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-825702

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-825702

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-825702

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-825702

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-825702

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-825702

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-825702

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-825702

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-825702

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-825702

>>> host: /etc/nsswitch.conf:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /etc/hosts:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /etc/resolv.conf:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-825702

>>> host: crictl pods:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: crictl containers:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> k8s: describe netcat deployment:
error: context "false-825702" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-825702" does not exist

>>> k8s: netcat logs:
error: context "false-825702" does not exist

>>> k8s: describe coredns deployment:
error: context "false-825702" does not exist

>>> k8s: describe coredns pods:
error: context "false-825702" does not exist

>>> k8s: coredns logs:
error: context "false-825702" does not exist

>>> k8s: describe api server pod(s):
error: context "false-825702" does not exist

>>> k8s: api server logs:
error: context "false-825702" does not exist

>>> host: /etc/cni:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: ip a s:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: ip r s:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: iptables-save:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: iptables table nat:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> k8s: describe kube-proxy daemon set:
error: context "false-825702" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-825702" does not exist

>>> k8s: kube-proxy logs:
error: context "false-825702" does not exist

>>> host: kubelet daemon status:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: kubelet daemon config:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> k8s: kubelet logs:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:18:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-571738
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:17:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-225144
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:17:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-211103
contexts:
- context:
    cluster: cert-expiration-571738
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:18:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-571738
  name: cert-expiration-571738
- context:
    cluster: kubernetes-upgrade-225144
    user: kubernetes-upgrade-225144
  name: kubernetes-upgrade-225144
- context:
    cluster: stopped-upgrade-211103
    user: stopped-upgrade-211103
  name: stopped-upgrade-211103
current-context: ""
kind: Config
users:
- name: cert-expiration-571738
  user:
    client-certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/cert-expiration-571738/client.crt
    client-key: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/cert-expiration-571738/client.key
- name: kubernetes-upgrade-225144
  user:
    client-certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.crt
    client-key: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.key
- name: stopped-upgrade-211103
  user:
    client-certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/stopped-upgrade-211103/client.crt
    client-key: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/stopped-upgrade-211103/client.key
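Note on the dump above: `current-context` is `""` and the profile under debug (`false-825702`) has no entry at all, which is why the `>>> k8s: cms:` step below fails with "context was not found". A minimal sketch of the lookup kubectl performs; the `resolve()` helper is illustrative, not kubectl's code, while the cluster/context names and servers are taken verbatim from the dump:

```python
# Minimal model of kubeconfig context resolution (illustrative only).
# Names and server URLs come from the "kubectl config" dump above.
config = {
    "clusters": {
        "cert-expiration-571738": {"server": "https://192.168.94.2:8443"},
        "kubernetes-upgrade-225144": {"server": "https://192.168.103.2:8443"},
        "stopped-upgrade-211103": {"server": "https://192.168.85.2:8443"},
    },
    "contexts": {
        "cert-expiration-571738": {"cluster": "cert-expiration-571738"},
        "kubernetes-upgrade-225144": {"cluster": "kubernetes-upgrade-225144"},
        "stopped-upgrade-211103": {"cluster": "stopped-upgrade-211103"},
    },
    "current-context": "",  # no context selected, as in the dump
}

def resolve(cfg, context_name):
    """Map a context name to its cluster's API server URL."""
    ctx = cfg["contexts"].get(context_name)
    if ctx is None:
        # Mirrors the failure seen in the ">>> k8s: cms:" step below.
        raise KeyError(f"context was not found for specified context: {context_name}")
    return cfg["clusters"][ctx["cluster"]]["server"]

print(resolve(config, "cert-expiration-571738"))  # https://192.168.94.2:8443
```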

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-825702

>>> host: docker daemon status:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: docker daemon config:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /etc/docker/daemon.json:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: docker system info:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: cri-docker daemon status:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: cri-docker daemon config:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: cri-dockerd version:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: containerd daemon status:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: containerd daemon config:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /etc/containerd/config.toml:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: containerd config dump:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: crio daemon status:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: crio daemon config:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: /etc/crio:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

>>> host: crio config:
* Profile "false-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-825702"

----------------------- debugLogs end: false-825702 [took: 3.162773649s] --------------------------------
helpers_test.go:175: Cleaning up "false-825702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-825702
--- PASS: TestNetworkPlugins/group/false (3.48s)

TestStartStop/group/old-k8s-version/serial/FirstStart (49.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.651058552s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.65s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-157431 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d6c41f35-cc7b-423c-b8e2-76531e7a8b3b] Pending
helpers_test.go:352: "busybox" [d6c41f35-cc7b-423c-b8e2-76531e7a8b3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d6c41f35-cc7b-423c-b8e2-76531e7a8b3b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003068334s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-157431 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.28s)

TestStartStop/group/old-k8s-version/serial/Stop (16.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-157431 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-157431 --alsologtostderr -v=3: (16.108796874s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157431 -n old-k8s-version-157431
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157431 -n old-k8s-version-157431: exit status 7 (74.614327ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-157431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
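In the EnableAddonAfterStop steps, `minikube status` exiting with code 7 on a just-stopped host is tolerated, as the log's "status error: exit status 7 (may be ok)" line notes. A sketch of that tolerance; the `check_status` helper is hypothetical, and the stopped host is simulated with `sh -c 'exit 7'` rather than a real minikube invocation:

```shell
# Illustrative helper mirroring the test's tolerance: run a status command
# and accept exit code 7 (stopped host, per this log) as well as 0.
check_status() {
  "$@"
  code=$?
  if [ "$code" -eq 0 ] || [ "$code" -eq 7 ]; then
    echo "ok (exit $code)"
  else
    echo "unexpected exit $code" >&2
    return 1
  fi
}

# Real usage would resemble:
#   check_status out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-157431
# Here a stopped host is simulated:
check_status sh -c 'exit 7'
```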

TestStartStop/group/old-k8s-version/serial/SecondStart (26.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-157431 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (25.774370309s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157431 -n old-k8s-version-157431
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (26.15s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-j28gs" [bcb842e0-68ab-415a-9899-b57f19282469] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-j28gs" [bcb842e0-68ab-415a-9899-b57f19282469] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003973645s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-j28gs" [bcb842e0-68ab-415a-9899-b57f19282469] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003596822s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-157431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-157431 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/FirstStart (51.72s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.720688019s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.72s)

TestStartStop/group/embed-certs/serial/FirstStart (43.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.975853377s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.98s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-211103
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-211103: (1.038713642s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.478950626s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.48s)

TestStartStop/group/no-preload/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-026579 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4e7644bf-6a7c-407a-bcef-89fd47b6b2d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4e7644bf-6a7c-407a-bcef-89fd47b6b2d5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004246994s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-026579 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.26s)

TestStartStop/group/newest-cni/serial/FirstStart (30.57s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1126 20:22:06.753768   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/addons-368879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (30.569824259s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.57s)

TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-949294 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bbd3f1ad-5639-44ac-bed1-8de1e6b81907] Pending
helpers_test.go:352: "busybox" [bbd3f1ad-5639-44ac-bed1-8de1e6b81907] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bbd3f1ad-5639-44ac-bed1-8de1e6b81907] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004224103s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-949294 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

TestStartStop/group/no-preload/serial/Stop (16.44s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-026579 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-026579 --alsologtostderr -v=3: (16.442055046s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.44s)

TestStartStop/group/embed-certs/serial/Stop (18.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-949294 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-949294 --alsologtostderr -v=3: (18.080029968s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-026579 -n no-preload-026579
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-026579 -n no-preload-026579: exit status 7 (80.463973ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-026579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (47.89s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-026579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.508975514s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-026579 -n no-preload-026579
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.89s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-949294 -n embed-certs-949294
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-949294 -n embed-certs-949294: exit status 7 (93.683213ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-949294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (47.55s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-949294 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.161172304s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-949294 -n embed-certs-949294
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.55s)

TestStartStop/group/newest-cni/serial/Stop (2.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-297942 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-297942 --alsologtostderr -v=3: (2.51792231s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.52s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297942 -n newest-cni-297942
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297942 -n newest-cni-297942: exit status 7 (104.025243ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-297942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-297942 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (12.133199441s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297942 -n newest-cni-297942
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.55s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-178152 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [784f93fd-b5f3-4353-977c-1c2395ef08b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [784f93fd-b5f3-4353-977c-1c2395ef08b7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004227371s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-178152 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.35s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-297942 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-178152 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-178152 --alsologtostderr -v=3: (16.355196574s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.36s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1126 20:23:06.730809   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/functional-960066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (45.455774392s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.46s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152: exit status 7 (76.003843ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-178152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-178152 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.28237915s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178152 -n default-k8s-diff-port-178152
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.63s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vghzh" [f10f0676-c975-4be6-ba07-875959ef0cdc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003293759s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8dsr7" [29f8c956-87a0-470c-96c6-0a98928e135f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00417327s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vghzh" [f10f0676-c975-4be6-ba07-875959ef0cdc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003610252s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-026579 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-026579 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8dsr7" [29f8c956-87a0-470c-96c6-0a98928e135f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002835443s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-949294 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-949294 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.582307234s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.58s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.289578768s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.29s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-825702 "pgrep -a kubelet"
I1126 20:23:48.843802   14258 config.go:182] Loaded profile config "auto-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-825702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kqn44" [4a92537c-0beb-47c6-81d0-5f857f9a2963] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kqn44" [4a92537c-0beb-47c6-81d0-5f857f9a2963] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003594485s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-825702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m2nr4" [31d1aad0-83f1-465f-88ca-6de709572587] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003192472s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m2nr4" [31d1aad0-83f1-465f-88ca-6de709572587] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003884496s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-178152 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-178152 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.054927088s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.06s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-9fvfc" [25e55254-a68b-461e-a2ce-649c127cf1b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004000661s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m3.447307762s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.45s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-825702 "pgrep -a kubelet"
I1126 20:24:31.363575   14258 config.go:182] Loaded profile config "kindnet-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-825702 replace --force -f testdata/netcat-deployment.yaml
I1126 20:24:32.144215   14258 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1126 20:24:32.204816   14258 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-29bhc" [d352ab1e-0e10-44fe-b958-49ff1d012af1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-29bhc" [d352ab1e-0e10-44fe-b958-49ff1d012af1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003507879s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.95s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-7wv72" [179bfa91-c1e0-428e-a5a5-7d4df8b74615] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-7wv72" [179bfa91-c1e0-428e-a5a5-7d4df8b74615] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003525437s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-825702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-825702 "pgrep -a kubelet"
I1126 20:24:43.740043   14258 config.go:182] Loaded profile config "calico-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-825702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zjkdh" [e84189bc-f6ee-4eeb-b459-1ac03181ba0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zjkdh" [e84189bc-f6ee-4eeb-b459-1ac03181ba0f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004153337s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-825702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1126 20:25:05.304337   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:25:10.426019   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (43.366811989s)
--- PASS: TestNetworkPlugins/group/flannel/Start (43.37s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1126 20:25:20.667704   14258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/old-k8s-version-157431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-825702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (39.971221144s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.97s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-825702 "pgrep -a kubelet"
I1126 20:25:23.138814   14258 config.go:182] Loaded profile config "custom-flannel-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-825702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xsmxq" [5d0aa795-ed14-4ce1-87cb-b99cca73630d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xsmxq" [5d0aa795-ed14-4ce1-87cb-b99cca73630d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004495383s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-825702 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-825702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
I1126 20:25:31.528922   14258 config.go:182] Loaded profile config "enable-default-cni-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-825702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7w6k7" [dcbbfdae-4346-4c6e-a9f9-022a78bc2ad2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7w6k7" [dcbbfdae-4346-4c6e-a9f9-022a78bc2ad2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003626798s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-825702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-54pnl" [ff699b9c-eeaa-4b26-a226-20447c720fd7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003372931s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-825702 "pgrep -a kubelet"
I1126 20:25:52.444128   14258 config.go:182] Loaded profile config "flannel-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-825702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pwvxs" [01c05b61-832d-4ef7-bde6-d96f8bb4ccc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pwvxs" [01c05b61-832d-4ef7-bde6-d96f8bb4ccc5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003380843s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-825702 "pgrep -a kubelet"
I1126 20:25:53.986630   14258 config.go:182] Loaded profile config "bridge-825702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-825702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ch9vg" [1ccc2ded-7496-4378-87d7-996b13fc46e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ch9vg" [1ccc2ded-7496-4378-87d7-996b13fc46e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003763692s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-825702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.10s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-825702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.10s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-825702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

Test skip (27/328)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-221304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-221304
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-825702 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-825702

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-825702

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-825702

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-825702

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-825702

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-825702

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-825702

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-825702

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-825702

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-825702

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /etc/hosts:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /etc/resolv.conf:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-825702

>>> host: crictl pods:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: crictl containers:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> k8s: describe netcat deployment:
error: context "kubenet-825702" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-825702" does not exist

>>> k8s: netcat logs:
error: context "kubenet-825702" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-825702" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-825702" does not exist

>>> k8s: coredns logs:
error: context "kubenet-825702" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-825702" does not exist

>>> k8s: api server logs:
error: context "kubenet-825702" does not exist

>>> host: /etc/cni:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: ip a s:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: ip r s:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: iptables-save:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: iptables table nat:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-825702" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-825702" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-825702" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: kubelet daemon config:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> k8s: kubelet logs:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:18:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-571738
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:17:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-225144
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:17:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-211103
contexts:
- context:
    cluster: cert-expiration-571738
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:18:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-571738
  name: cert-expiration-571738
- context:
    cluster: kubernetes-upgrade-225144
    user: kubernetes-upgrade-225144
  name: kubernetes-upgrade-225144
- context:
    cluster: stopped-upgrade-211103
    user: stopped-upgrade-211103
  name: stopped-upgrade-211103
current-context: ""
kind: Config
users:
- name: cert-expiration-571738
  user:
    client-certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/cert-expiration-571738/client.crt
    client-key: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/cert-expiration-571738/client.key
- name: kubernetes-upgrade-225144
  user:
    client-certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.crt
    client-key: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.key
- name: stopped-upgrade-211103
  user:
    client-certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/stopped-upgrade-211103/client.crt
    client-key: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/stopped-upgrade-211103/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-825702

>>> host: docker daemon status:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: docker daemon config:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: docker system info:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: cri-docker daemon status:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: cri-docker daemon config:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: cri-dockerd version:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: containerd daemon status:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: containerd daemon config:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: containerd config dump:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: crio daemon status:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: crio daemon config:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: /etc/crio:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

>>> host: crio config:
* Profile "kubenet-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-825702"

----------------------- debugLogs end: kubenet-825702 [took: 3.010685524s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-825702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-825702
--- SKIP: TestNetworkPlugins/group/kubenet (3.17s)

TestNetworkPlugins/group/cilium (3.64s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-825702 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-825702

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-825702

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-825702

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-825702

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-825702

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-825702

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-825702

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-825702

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-825702

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-825702

>>> host: /etc/nsswitch.conf:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> host: /etc/hosts:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-825702

>>> host: crictl pods:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> host: crictl containers:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> k8s: describe netcat deployment:
error: context "cilium-825702" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-825702" does not exist

>>> k8s: netcat logs:
error: context "cilium-825702" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-825702" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-825702" does not exist

>>> k8s: coredns logs:
error: context "cilium-825702" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-825702" does not exist

>>> k8s: api server logs:
error: context "cilium-825702" does not exist

>>> host: /etc/cni:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> host: ip a s:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> host: ip r s:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> host: iptables-save:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> host: iptables table nat:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-825702

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-825702

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-825702" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-825702" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-825702

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-825702

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-825702" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-825702" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-825702" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-825702" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-825702" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> host: kubelet daemon config:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> k8s: kubelet logs:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:18:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-571738
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:17:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-225144
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21974-10722/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:17:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-211103
contexts:
- context:
    cluster: cert-expiration-571738
    extensions:
    - extension:
        last-update: Wed, 26 Nov 2025 20:18:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-571738
  name: cert-expiration-571738
- context:
    cluster: kubernetes-upgrade-225144
    user: kubernetes-upgrade-225144
  name: kubernetes-upgrade-225144
- context:
    cluster: stopped-upgrade-211103
    user: stopped-upgrade-211103
  name: stopped-upgrade-211103
current-context: ""
kind: Config
users:
- name: cert-expiration-571738
  user:
    client-certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/cert-expiration-571738/client.crt
    client-key: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/cert-expiration-571738/client.key
- name: kubernetes-upgrade-225144
  user:
    client-certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.crt
    client-key: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/kubernetes-upgrade-225144/client.key
- name: stopped-upgrade-211103
  user:
    client-certificate: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/stopped-upgrade-211103/client.crt
    client-key: /home/jenkins/minikube-integration/21974-10722/.minikube/profiles/stopped-upgrade-211103/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-825702

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-825702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-825702"

                                                
                                                
----------------------- debugLogs end: cilium-825702 [took: 3.486946136s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-825702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-825702
--- SKIP: TestNetworkPlugins/group/cilium (3.64s)